Beyond Cheating: Rethinking Assessment in the Age of AI
Conversations surrounding student use of artificial intelligence to cheat on exams are distracting from a wider discussion about how to improve assessment, according to a prominent academic.
Phillip Dawson, co-director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia, argues that the emphasis on cheating is misplaced. “Validity matters more than cheating,” he stated, adding that “cheating and AI have really taken over the assessment debate.”
Speaking at the U.K.’s Quality Assurance Agency conference, Dawson explained, “Cheating and all that matters. But assessing what we mean to assess is the thing that matters the most. That’s really what validity is … We need to address it, but cheating is not necessarily the most useful frame.”
Dawson’s comments followed the release of a survey by the Higher Education Policy Institute, which found that 88 percent of U.K. undergraduates had used AI tools in some form when completing assessments.
He agreed with the report’s suggestion that universities should adopt a “nuanced policy” acknowledging that AI use by students is inevitable, and recognize that AI tools “can genuinely aid learning and productivity.”
Dawson contends that “assessment needs to change … in a world where AI can do the things that we used to assess.”
He suggested that referencing, or citing sources, may be one area where AI can be effectively utilized. “I don’t know how to do referencing by hand, and I don’t care … We need to take that same sort of lens to what we do now and really be honest with ourselves: What’s busywork? Can we allow students to use AI for their busywork to do the cognitive offloading? Let’s not allow them to do it for what’s intrinsic, though.”
Dawson dismissed what he called “discursive” measures to limit AI use as a “fantasy land.” These measures include lecturers providing guidelines on when and how AI use is permitted. Instead, he argued for “structural changes” to assessments.
“Discursive changes are not the way to go. You can’t address this problem of AI purely through talk. You need action. You need structural changes to assessment [and not just a] traffic light system that tells students, ‘This is an orange task, so you can use AI to edit but not to write.’”
He believes that, without supervision, it is impossible to prevent AI use. “We have no way of stopping people from using AI if we aren’t in some way supervising them; we need to accept that,” he said. “We can’t pretend some sort of guidance to students is going to be effective at securing assessments. Because if you aren’t supervising, you can’t be sure how AI was or wasn’t used.”
Dawson outlined three possible outcomes for the impact of AI on grades: grade inflation, where standards stay fixed while student performance, boosted by AI, rises; norm referencing, where students are graded on their performance relative to others; and, his preferred option, “standards inflation,” where standards are perpetually raised as the capabilities of AI and students improve.
Ultimately, Dawson believes that AI’s impact on assessment is fundamental, saying, “The times of assessing what people know are gone.”