Academic paper written by chatbot argues AI increases possibility of plagiarism

Ian Patrick, FISM News

An academic paper released in January of this year makes the case that AI chatbots pose a danger to “academic honesty.” Ironically enough, the paper itself was written by a chatbot.

Titled “Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT,” the paper acknowledges that there may be some benefits to the use of AI chatbots in education. These benefits can include “increased student engagement, collaboration, and accessibility.”

“However, these tools also raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism,” the paper continues.

AI essay-writing systems are designed to generate essays from a set of parameters or prompts. This means students could potentially use these systems to cheat on assignments by submitting essays that are not their own work (e.g. Dehouche 2021).

“It can be difficult to distinguish between a student’s own writing and the responses generated by a chatbot application,” the AI-generated paper claims.

Debby Cotton, director of academic practice at Plymouth Marjon University, was one of the so-called “authors” of this paper. Cotton told The Guardian that she and the other fake authors “wanted to show that ChatGPT is writing at a very high level.”

“The technology is improving very fast and it’s going to be difficult for universities to outrun it,” Cotton added.

Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, noted that educators are growing fearful of this technology.

“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine because the standard of writing is often good,” Lancaster said.

However, he added that more “specialized” coursework may prove harder for AI to write convincingly.

“I don’t think it could write your whole dissertation,” Lancaster said.

Some colleges are working to implement safeguards that help professors identify AI-generated papers.