Stop Chasing Cheaters. Start Building Curiosity.

AI-Proof Assessment Design: Why Engagement Beats Detection in Education

AI-proof assessment design is becoming essential as traditional detection methods fail. Colleges buy AI detection software. Students find AI humanizer tools. Colleges add browser lockdowns. Students use their phones. Colleges implement eye-movement tracking in exams. Students find new workaround tools. We’re spending millions on surveillance technology while cheating rates keep climbing. What if we’re solving the wrong problem?

Here’s what we know: 92% of students now use AI tools like ChatGPT in their studies, and 88% use them specifically for assessments. In UK universities alone, nearly 7,000 students were caught cheating with AI in the 2023-24 academic year, triple the previous year’s number. And that’s just the ones who got caught. One university study found that 94% of AI-written submissions went completely undetected. Surveillance isn’t keeping up; assessment design has to.

The Surveillance Trap

So what’s the response been? More surveillance: eye-tracking software, cameras monitoring every movement, keystroke analysis, and browser lockdowns. And, yes, AI detection tools.

The problem is they aren’t working.

Despite 68% of teachers now using AI detection tools, student discipline rates for AI-related issues jumped from 48% in 2022-23 to 64% in 2024-25. The cheating is increasing faster than our ability to catch it.

For every detection method we create, someone develops a workaround. There are entire apps now, from Comet browsers to Cluely and dozens more, built specifically to help students cheat on proctored exams. It’s an arms race we can’t win.

But what if we’re fighting the wrong battle?

A Different Question

Instead of asking “How do we catch cheaters?”, what if we asked “How do we make students not want to cheat in the first place?”

I know, I know. It sounds idealistic. But the research actually backs this up.

When students find learning meaningful, most behavioral problems, including cheating, disappear on their own. A meta-analysis of inquiry-based learning found it improves student outcomes significantly, not because it’s harder to cheat on, but because students actually get engaged in the material.

Think about it from a student’s perspective. If you’re genuinely curious about a question, genuinely invested in finding an answer, why would you hand that discovery over to AI? The whole point becomes the journey, not just getting the right answer to submit.

What Makes Assessments Engaging?

So what does this look like in practice? After looking at research from the past few years and talking to teachers who’ve experimented with this approach, a few patterns emerge.

  • Break it into stages. A single big assignment is easy to outsource. But what about: brainstorm three topics, submit a proposal with your reasoning, share a draft for peer review, reflect on the feedback you got, then submit your final version? Each stage builds on the last. Each requires you to engage with what happened before. AI can help, sure, but it can’t replace the journey.
  • Ask for the thinking, not just the answer. “What approaches did you consider? Why did you choose this one? What would change if we modified this assumption?” These questions reveal whether someone actually wrestled with the problem or merely copy-pasted a solution.
  • Make them show their AI use. Here’s something interesting: ask students to submit their AI conversation history alongside their work. What questions did they ask? Were they thoughtful (“How do different economic theories explain this policy outcome?”) or lazy (“Write me an essay on economic policy”)? The quality of questions students ask AI reveals as much about their thinking as the final answer. Students who ask precise, context-specific questions are doing real intellectual work. Those who ask AI to do everything aren’t. Institutes like IIT Delhi are already doing this.
  • Add a human element. Present your research to classmates and answer their questions. Participate in peer review where you have to give meaningful feedback on someone else’s work. AI can generate content, but it can’t help you think on your feet when someone asks a follow-up question you didn’t anticipate.

A teacher told me about switching from traditional research papers to podcast assignments where students had to interview each other about their topics. Cheating dropped to nearly zero, not because cheating was impossible, but because students found it more interesting to actually do the work.

The Honest Challenges of AI-Proof Assessments

Let’s be real: this approach takes more work. At least initially.

Designing these kinds of assessments requires thought. Grading them takes time. Students need time to adjust, too; they’re used to questions with clear right answers, not open-ended exploration.

And it’s not foolproof. Some students will still try to game the system. Recent research acknowledges that there’s little hard evidence that authentic assessment completely prevents cheating.

But here’s the thing: it doesn’t have to be perfect. It just has to be better than what we’re doing now.

Teachers who have made the switch report that after the initial learning curve, it often takes less time than the old approach. Peer review reduces grading burden. When students are more engaged, you spend less time policing and more time teaching.

You don’t have to redesign your entire curriculum overnight. Pick one assignment. Add one element that requires genuine engagement, maybe a reflective component, maybe peer feedback, maybe a presentation with Q&A. Or try asking students to document their AI interactions and explain their questioning strategy. These engagement-based assessment strategies shift the focus from catching cheaters to making learning genuinely compelling.

See what happens. Adjust. Try again.

Talk to your students about it. Ask them: What feels like genuine help? What feels like cheating? Where’s the line? You might be surprised by their honesty.

The Bigger Picture

Look, AI isn’t going anywhere. It’s going to keep getting better, faster, and more sophisticated. We can spend the next decade in an increasingly expensive arms race, adding more surveillance, more detection software, and more proctoring.

Or we can ask what education is actually for.

If the goal is to transfer information from one brain to another, AI wins. It’s faster, more efficient, and always available. But if the goal is to develop thinking, to spark curiosity, to help students become people who can grapple with complex problems, that’s still fundamentally human.

The student who’s genuinely curious about a problem, who wants to understand the systems at play, who gets excited about proposing solutions, that student doesn’t need to cheat. They’re already getting what education is supposed to provide: the tools and confidence to think.

We can’t stop AI from existing. But we can stop designing assessments that make AI use the obviously easier choice, and start designing assessments where doing the actual thinking is more interesting than outsourcing it.

Some colleges are already making this shift. They’re moving from single-point assessments to evolving case studies where students explore problems individually first, then collaborate in groups to refine their thinking. The format itself changes the game: students use AI as a thinking partner in early stages, but the real work happens in defending ideas, integrating perspectives, and adapting to new information.

At Edwisely, we’ve built this approach into our platform because we’ve seen it work. When assessment design prioritizes engagement over enforcement, students choose thinking over shortcuts.

Maybe that’s the answer we’ve been looking for.
