Key points:
- Ninety-two percent of students now rely on AI in some form, and 88 percent have used generative AI for assignments, according to a new survey.
- Students’ AI usage can range from summarizing content to full-scale writing support, which raises the question: What can educators do if they suspect an assignment was authored by AI?
There’s no denying that AI can enhance learning and digital literacy, but some uses raise important questions about ethics and their impact on academic outcomes.
The following scenario is becoming more common for educators: You’re grading assignments, reading them one by one, until one catches your eye. You can’t put your finger on exactly why, but it doesn’t sound like your student. It sounds like AI, so you run it through an AI detector tool, and you get the result: 99 percent AI. What do you do?
Understand what the score means
AI content detector tools are trained to pick up on signs that text was written by LLMs like ChatGPT, Gemini, and DeepSeek. But each tool arrives at its conclusion in a different way, and the features available to users can vary widely.
First, ensure you are using reliable detector technology that is backed by research. As new AI detection tools emerge, older tools on the market may become less reliable.
Second, understand the result. If the whole text (or a segment of text) gets a 99 percent AI score, that doesn’t necessarily mean the entire text was AI-generated. Rather, the tool is 99 percent confident that AI was used to generate some portion of the text.
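To make that distinction concrete, here is a minimal sketch. The per-segment scores and the reporting function are hypothetical stand-ins for whatever your detection tool actually provides; real detector APIs vary.

```python
# Hedged sketch: a "99% AI" score is a confidence level, not a proportion.
# The (segment, confidence) pairs below are hypothetical detector output.
def summarize(results: list[tuple[str, float]], threshold: float = 0.99) -> None:
    """Report how many segments a hypothetical detector flagged."""
    flagged = [seg for seg, conf in results if conf >= threshold]
    # 0.99 confidence on a segment means the tool is 99 percent sure AI
    # touched that segment -- NOT that 99 percent of the document is AI text.
    print(f"{len(flagged)} of {len(results)} segments flagged "
          f"at >= {threshold:.0%} confidence")

# Example: only one of three segments is flagged, yet that flag is at 99%.
summarize([("intro", 0.12), ("body", 0.99), ("conclusion", 0.30)])
```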
Talk to your student
I always recommend the simple action of talking to your student.
You could ask about their writing process to get a sense of how well they know their own submission. Or you could simply ask if they used AI. They may admit it: they were swamped and had to pick one assignment on which to take a shortcut. Or perhaps they wrote a first draft, weren’t happy with the result, and asked ChatGPT to improve it.
This is a great opportunity to discuss what is and isn’t a violation of academic integrity, and to remind your student how to handle a situation like this in the future. Should they ask for an extension? Or just turn in that rough, pre-AI first draft?
Check for misunderstandings
Sometimes there’s a mismatch between what a teacher considers cheating, what a student considers cheating, and what triggers an AI detector. Here are some common uses of AI that may trigger detection:
- Grammar checkers like Grammarly that incorporate AI assistance in the writing process
- Translation tools, which are often built on LLMs
- Google Docs AI features like “Help me write”
- Talking to ChatGPT for brainstorming and research, then reusing phrases written by the AI
- Using ChatGPT for wording advice
I recommend using an AI policy like this tier system to ensure that students and teachers are on the same page about which assistive tools are allowed. This prevents a misunderstanding where, for example, a teacher allows Grammarly without realizing it is now a full AI writing assistant, while also running an AI detector that will flag any student who uses Grammarly’s AI features.
Look at writing process artifacts
Say your student admitted to using some phrasing from ChatGPT. Or perhaps they claim that their case is a rare false positive. The best next step to clear their name and confirm that they did the work is to look at writing process artifacts. What research did they do for this assignment, and did they take notes? Do they have early drafts saved?
If they worked in Google Docs, select File -> Version history -> See version history to see a full history of their writing process. The history will make it clear whether they copied from ChatGPT and pasted into the file, typed the text in one uninterrupted pass from top to bottom (a sign that they had AI assistance but wanted to fake the writing process), or built the document up gradually. If they have a robust, multi-hour writing history, that’s very compelling evidence that they wrote the work themselves.
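If you want the same view programmatically, a sketch like the one below could work, assuming you have authorized Google Drive API credentials and the google-api-python-client library. This is an illustration of what the Drive API exposes, not a supported grading workflow.

```python
# Sketch: list revision timestamps for a Google Docs file via the Drive API.
# Assumes google-api-python-client is installed and `creds` is an
# authorized credentials object with Drive read access.
from googleapiclient.discovery import build

def list_revisions(creds, file_id: str) -> None:
    """Print when each saved revision of the document was made.

    Many revisions spread over hours suggest a genuine writing process;
    a single revision containing the full text suggests a paste.
    """
    drive = build("drive", "v3", credentials=creds)
    response = drive.revisions().list(
        fileId=file_id,
        fields="revisions(id,modifiedTime)",
    ).execute()
    for rev in response.get("revisions", []):
        print(rev["modifiedTime"], "revision", rev["id"])
```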
Consider the stakes
Derek Newton, author of the academic integrity newsletter The Cheat Sheet, often compares AI detectors to metal detectors. When you walk through a metal detector and it goes off, you don’t immediately get arrested and sent to prison. Instead, security staff investigate further.
Similarly, AI detection is a great way to flag assignments, but a flag warrants further investigation before any punitive measures. A nonzero false positive rate means that any positive detection could be genuine, or it could be the rare case where an otherwise reliable detector gets it wrong.
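To see why even a reliable detector demands caution, here is an illustrative back-of-the-envelope calculation using Bayes’ theorem. All three rates are assumptions chosen for the example, not measured properties of any real detector.

```python
# Illustrative base-rate arithmetic; every number here is an assumption.
prevalence = 0.20      # assumed share of submissions that truly used AI
sensitivity = 0.95     # assumed chance the detector flags real AI text
false_positive = 0.01  # assumed chance it flags honest human writing

# Bayes' theorem: P(AI | flagged) = P(flagged | AI) * P(AI) / P(flagged)
p_flagged = sensitivity * prevalence + false_positive * (1 - prevalence)
p_ai_given_flag = (sensitivity * prevalence) / p_flagged
print(f"P(AI | flagged) = {p_ai_given_flag:.1%}")  # about 96.0%
```

Even under these favorable assumptions, roughly 1 in 25 flags points at an innocent student, which is exactly why a flag should start an investigation rather than end one.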
If the student has evidence of their writing process, I would be inclined to believe them. At worst, they learn a lesson about not leaning on AI assistance, even lightly.
If the student has a history of their work being flagged by AI detectors, that should also be considered. They may get the benefit of the doubt once, but the more often it happens, the clearer it becomes that there is an issue.
Hopefully this guide is helpful to anyone navigating the nuances of AI plagiarism. It’s a difficult situation to be in, which is why it’s important to have the tools and information to handle a case like this when it comes up.