Contract cheating – where commercial cheating services provide assignments for university students – has become a global problem.
Australia is not immune. According to the latest data, record numbers of Australian students are paying someone else to do their assessments.
This comes amid broader concerns about rising levels of cheating during COVID.
Last week, the University of New South Wales said it was detecting more than double the amount of cheating among its students post COVID. Before the pandemic, just under 2% of students were caught in misconduct processes each year. Now it is close to 4.5%.
“It’s really taken off during the pandemic,” Deputy Vice-Chancellor George Williams told Radio National.
This isn’t just a problem for individual universities. It threatens the integrity and reputation of a university degree and the whole higher education system.
Our research suggests the way to address this is to revert to more traditional ways of holding exams.
Harsh penalties are not working
There are harsh penalties for cheating if a student is caught. They can be expelled from their course or even have their degree revoked.
However, these deterrents are not working. Research in 2021 showed one in ten students either pay someone to write their essays or submit content they did not write themselves. Other studies show up to 95% of cases go undetected.
If assignments and many exams are done online or at home, this provides new opportunities to collude with other students. Or to pay a cheating service to do it.
Students can also use artificial intelligence tools to write essays, which plagiarism software cannot detect.
Meanwhile, academic staff are already overworked and may not have the time or capacity to detect and report misconduct cases.
The issue, of course, has been made worse by the increased use of online assessments during COVID.
Our research looked at how 47 academics working in computing courses were upholding academic integrity during COVID and the move online.
We focused on bachelors degree and coursework masters degrees across 41 Australian universities.
Our interviewees told us that pre-pandemic, the majority of final exams were done in person and were monitored by academic staff. During COVID, many assessments moved online and simply could not be supervised.
As one interviewee told us,
There was a lot more cheating, both plagiarism and collusion […] students are cheating in way that they were not able to cheat with paper, supervised exams.
we would release the exam at 8am […] and about 20 minutes later the questions were appearing on the contract cheating sites […] we did think of limiting the time they had available to do the exam, but clearly, the internet moves faster than we do.
The random interview approach
Interviewees told us how post-exam interviews were used as a way to try to detect and prevent cheating during online assessments.
In these interviews (also called vivas), academics can check whether an exam was completed by the right student and whether they worked alone.
Before an exam, students were warned they might be required to do an interview after the exam. They might be selected randomly or might be chosen because of suspicions raised by their exam answers.
But as one interviewee explained, even this wasn’t enough to stop cheating – “the thought of a viva didn’t stop them”.
Our research suggests universities should strongly consider going back to the past and holding exams in person. As one interviewee noted:
We haven’t come up with an answer as to how to do assured assessment online […] all of the solutions that we’ve tried for online invigilation [monitoring] have problems of one kind or another.
Another academic was more blunt:
you cannot ensure academic integrity in online assessment.
Why we need old fashioned methods
There is huge interest in moving university life online post-COVID, as the sector moves to make learning as flexible as possible.
Some universities in our study are considering moving entirely to online exams. This obviously presents ongoing integrity issues. And it suggests we may end up employing and trusting people as qualified experts when they have not genuinely earned those qualifications.
But rather than fancier technology or harsher penalties, our research suggests we need to revert to more traditional methods of assessing students.
This means traditional face-to-face exams, with student identity card checks, arranged seating, and exam rooms monitored by staff.
This will be less flexible for students, particularly those who are still overseas or who still need to practise social distancing. But it remains a tried and trusted method of ensuring students are doing their own exams.
The author would like to acknowledge the team members who worked on this research: Sander Leemans, Queensland University of Technology, Regina Berretta, University of Newcastle, Ayse Bilgin, Macquarie University, Trina Myers, Queensland University of Technology, Judy Sheard, Monash University, Simon, formerly of the University of Newcastle and Lakmali Herath Jayarathna, Central Queensland University.