Generative AI is a noteworthy presence on college campuses. At least 50% of college students have used AI to write papers, generate ideas, or both. That adds up to millions of essays, scholarly articles, and research papers, with the count still rising. Recent reports show that 70% of teens have used at least one type of GenAI tool for schoolwork.
While some of this “adoption” might as well be called cheating, not all of GenAI’s help is about ill-intentioned shortcuts. Besides churning out an academic paper, AI can be used to research concepts, offer learning support, find page numbers for text evidence, or just brainstorm ideas.
Where is the line between getting some help and breaking academic rules? And can universities check for AI-generated content submitted by students?
Many Universities Use AI Detectors—Here’s Why
Generative AI came on the scene quickly and in a big way. With the release of easily accessible AI tools, universities suddenly needed to react to them and develop policies around their use. Higher education is not well-known for flexibility and quick pivots, but in the face of AI, colleges and universities reacted with uncharacteristic swiftness.
The ability to use artificial intelligence to craft academic work raises a host of considerations related to ethics, plagiarism, and academic integrity—issues that strike at the heart of education’s core values.
AI detection tools are part of many universities’ multifaceted approaches to addressing the use of AI in academic coursework, research and studies.
For policies, permissions, restrictions, and consequences around generative AI to carry any weight, institutions first need to convince students that AI use can be detected.
How AI Writing Detection Works
Just as AI can generate language using natural language processing (NLP) and large language models (LLMs), it can also be trained to recognize text that AI created.
Because AI language generation is heavily dependent on patterns, the language it produces often follows some predictable patterns, too. This can result in some predictable traits in machine-generated writing:
- Unnatural phrasing
- Odd word choice
- Repetitive sentence structure
- Overuse of words or phrasing
- Lack of nuance
- Lack of personal experience or opinion
AI detection tools like GPTZero, Turnitin, and Copyleaks use machine-learning algorithms to analyze large amounts of AI-generated text, identify tell-tale patterns, and distinguish between human writing and machine-generated writing.
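To make the pattern idea concrete, here is a toy sketch of two surface signals such detectors draw on: lexical diversity (overused words) and variation in sentence length (sometimes called "burstiness"). This is purely illustrative and is not how GPTZero, Turnitin, or Copyleaks actually work; real tools use trained machine-learning classifiers, and the function name and sample texts here are invented for the example.

```python
# Toy illustration of surface-level text signals (NOT a real AI detector).
# Real detectors use trained ML classifiers; this only shows the kind of
# patterns they look for: word repetition and uniform sentence structure.
import re
import statistics

def surface_signals(text: str) -> dict:
    """Return two crude signals: type-token ratio and sentence-length spread."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    # Type-token ratio: unique words / total words (low = repetitive wording)
    ttr = len(set(words)) / len(words) if words else 0.0
    # "Burstiness": spread of sentence lengths (low = uniform structure)
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    return {"type_token_ratio": round(ttr, 3), "burstiness": round(burstiness, 3)}

# Repetitive, uniform text scores low on both signals...
uniform = "The cat sat on the mat. The dog sat on the mat. The rat sat on the mat."
# ...while varied, personal writing tends to score higher on both.
varied = ("Honestly? I never expected that. My grandmother, who raised four kids "
          "through two recessions, always said persistence beats talent.")

print(surface_signals(uniform))
print(surface_signals(varied))
```

Note that these signals alone would flag plenty of legitimate human writing (formulaic lab reports, for instance), which is exactly why real detectors layer many features into a statistical model rather than relying on any single cue.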
Educators can use these tools to catch students using generative AI, and students can use them for a preemptive scan before turning in their work. If AI detection tools were 100% right all the time, that would be the end of the story. But they're not.
Accuracy and Limitations
Machine-generated text isn’t perfect. So, it follows that AI-powered detection tools aren’t perfect, either. In 2023, OpenAI pulled its own AI-content detection tool due to issues with accuracy. But AI evolves quickly, and accuracy is constantly improving.
While many detectors are highly accurate, no system is foolproof. False positives are a common challenge in AI detection. When human writing is highly structured or formal or reflects the traits an AI detector is looking for, it can be flagged as AI-written.
The potential for false positives is why many universities use AI detectors as part of a larger system of checks and balances when investigating plagiarism and cheating. Academic dishonesty is a serious charge, and most colleges won’t rely on a single tool as evidence, treating a detector’s score as one signal among many.
Regarding the use of AI detectors in academia, the University of Kansas puts it well: “The tool provides information, not an indictment.”
The Role of AI Humanizers
A new consideration is emerging in the world of AI-generated text and academia: the AI Humanizer. AI Humanizers work like detectors to spot tell-tale patterns in machine-generated content. But text humanizers take AI detection a step further—they rewrite or enhance the writing to sound more natural and human.
Many AI Humanizers make AI-generated text much more difficult to detect. This poses an additional level of concern for universities, as the same ethical and academic integrity concerns remain.
GPTHuman is an AI text humanizer designed to produce content that bypasses AI detection. The writing and rewriting of GPTHuman.ai transforms robotic machine-generated text to sound natural, authentic, and human-made.
University Policies and Consequences
Many universities have embraced the idea that generative AI is here to stay and requires increased vigilance and comprehensive campuswide policies and procedures.
For instance, institutions like Montclair State University and East Central College publish good, old-fashioned checklists to help faculty uncover AI content patterns in student writing.
AI detectors remain a powerful deterrent for students as institutions put policies in place about appropriate use, prohibited use, and consequences for misusing AI-generated content. Disciplinary actions often mirror those of other acts of academic dishonesty: a failing grade, being dropped from a class, or more severe penalties.
In addition, many universities are proactively addressing generative AI in the classroom. For instance, the Massachusetts Institute of Technology (MIT) reminds instructors that “the heart of teaching is undeniably human” and stresses the promotion of transparency, open dialogue and intrinsic motivation to succeed honestly.
Conclusion
Universities can—and do—use AI detection tools to uncover AI writing in student work. The detection isn’t foolproof, but it is continuously improving. Because teaching and learning are human endeavors, institutions are expanding efforts toward promoting academic integrity and honesty on campus.
Generative AI and AI Humanizer tools are powerful assets in a student’s toolbox, but they should be used mindfully and carefully, balanced with the hard work and critical thinking that education requires. Genuine, ethical scholarship is the key to academic success.
Sources:
https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work
https://guides.library.ttu.edu/artificialintelligencetools/detection
https://cte.ku.edu/careful-use-ai-detectors
https://teach.its.uiowa.edu/news/2024/09/case-against-ai-detectors
https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-00140-5
https://ldlprogram.web.illinois.edu/academic-integrity-statement