Cheating with AI: Australian Universities Implement New Rules

As technology advances rapidly, universities around the world are being forced to adapt to a changing educational landscape. In Australia, universities are changing the way they conduct exams and assessments amid concerns that students are using artificial intelligence (AI) software to write essays.

The Group of Eight, Australia’s leading research-intensive universities, has said it has revised how it will run assessments this year because of the emerging technology. The group’s deputy chief executive, Dr Matthew Brown, said its institutions were “proactively tackling” AI through student education, staff training, redesigned assessments, and targeted technological and other detection strategies.

ChatGPT, an AI chatbot created by OpenAI and launched in November, has been banned in New York’s public schools over concerns about its potential for plagiarism. The software can generate text on any subject in response to a prompt or query. In London, one academic tested it against a 2022 exam question and said the AI’s answer was “coherent, comprehensive and sticks to the points, something students often fail to do”, adding that he would have to “set a different kind of exam” or deprive students of internet access in future exams.

In Australia, academics have cited concerns over ChatGPT and similar technology’s ability to evade anti-plagiarism software while providing quick and credible academic writing. The University of Sydney’s latest academic integrity policy now specifically mentions “generating content using artificial intelligence” as a form of cheating. A spokesperson said while few instances of cheating had been observed, and cases were generally of a low standard, the university was preparing for change by redesigning assessments and improving detection strategies.

The Australian National University has redesigned its assessments to rely more on laboratory activities and fieldwork, will use timed exams, and will introduce more oral presentations.

Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales, said teachers were in “crisis meetings” about how exams would be marked in the new year and whether protocols were in place to deal with plagiarism. “People are already using it to submit essays,” he said. “We should’ve been aware this was coming … and we do tend to sleepwalk into the future.”

Walsh said with more advanced programs arriving – including GPT-4 from OpenAI – simply banning the platform was unrealistic. “It’s a technical fail – there’s a thing called VPN, and it misses the point,” he said. “There are technical solutions – digital watermarking, but you can just run another program over it to destroy the watermark. It’s an arms race that’s never going to finish, and you’re never going to win.”

Despite the challenges AI poses for the education sector, Professor Walsh believes it has great potential for innovation and for streamlining teachers’ work. “Teachers hate marking essays, and with suitable prompts it can be used to mark and provide feedback teachers wouldn’t have the time or patience to give,” he said. “We don’t want to destroy literacy, but did calculators destroy numeracy?”

Flinders University was one of the first in Australia to implement a specific policy against computer-generated cheating. Its deputy vice-chancellor, Prof Romy L. Lopatko, said: “It is important that students understand that using AI-generated work is academic misconduct and will not be tolerated.”

While AI has great potential for innovation, universities must ensure it is used ethically rather than to cheat in exams and assessments. They are continuing to adapt to the new technology and are finding ways to maintain the integrity of their academic assessments while also preparing students for the future.
