A Furman University student just failed a philosophy class for using ChatGPT to develop the final essay for the class, on David Hume’s paradox of horror. Darren Hick, the student’s philosophy professor, told Synthedia in an email interview:
“I did not explicitly tell students they couldn’t use chatbot software to write their essays. I also don’t tell them explicitly that they can’t pay someone else to write their essays. Happily, such scenarios are covered by the definition of plagiarism in the student handbook.”
The Challenge for Education Institutions
Furman University is not alone in grappling with the impact of AI writing assistants. A group of leading Australian universities is beginning to address the rise of AI-based writing assistants proactively.
“‘Australia's top universities are updating integrity policies and redesigning exams to account for the risk students will use sophisticated artificial intelligence to cheat…’ according to The Oz. Sydney University apparently updated its academic integrity policy in November ‘to include content generated using artificial intelligence.’ The University of Queensland also mentions AI as one form of contract cheating in its policy to be implemented next year.”
Mike Sharples is an emeritus professor of educational technology at The Open University, UK. He wrote in May 2022 that GPT-3 could produce graduate-level essays. That assessment was based on the Davinci GPT-3 model; he commented that GPT-3.5 was even better.
Joanna Stern from the Wall Street Journal decided to test out ChatGPT. She enrolled in an AP literature class for a day at a New Jersey high school to see whether an AI-powered writing assistant could help her get through class.
Her essay arguing that Ferris Bueller’s Day Off was an existentialist text with themes similar to Franz Kafka’s Metamorphosis received a 3 out of 6 on the AP grading rubric.
A key issue was the presence of errors. The essay attributed a quote to the wrong character and confused the movie's setting with the book's. The teacher also suggested there was not “a lot of vivid writing” in the essay. Interestingly, 3 out of 6 is a passing grade in an AP class. This still might be a warning to students who don’t plan to edit and revise their ChatGPT output.
The Rules
There are at least two policies in Furman’s online Academic Integrity policy (i.e. student handbook) that may be interpreted to forbid using an AI-based tool to assist with classwork.
You can see under the “Cheating” section that “representing someone else’s work as your own” may apply. The plagiarism policy referenced by Professor Hick provides even more detail:
Representing someone else’s ideas, words, expressions, statements, pictures, graphs, organizational structure, etc., as your own without proper acknowledgment or citation. Please note that this applies to material drawn from any source, including the Internet. You should consult with your instructor about the proper citation format for Internet sources.
Copying word for word from another source without proper attribution.
Paraphrasing another’s written ideas and presenting them as one’s own.
Can you plagiarize an AI model? ChatGPT and other AI writers generally grant users rights to their output. Much of this rests on how you define “someone else’s” and “another’s” in the first and third bullets above. Granted, the key point here is attribution to a source. It appears using ChatGPT violates the spirit of the policy, even if there is some ambiguity.
The university was not able to tell me before this post was published whether Grammarly or Wordtune are approved for use. However, I assume, for now, these tools are okay. They do not change the ideas in the content; instead, they modify how the ideas are expressed. There is another clause, under the term “Unacceptable Collaboration,” that may address this with a slight change in wording.
Submitting as one’s own work the product of collaboration with another student or students.
Since Grammarly and Wordtune are not students and do not change the main ideas of the output, they are probably within the rules. You might liken them to calculators for writing. Where this may get confusing is when these solutions add text generation to complement their proofreading and editing features. Wordtune is backed by AI21 Labs, which has its own text generation model that could be incorporated into the product.
The Magic or Horror of ChatGPT
Some educators are interested in discovering how ChatGPT and other LLMs can be used constructively in the learning process. However, the big focus now is how to curb usage or catch users. We should expect that focus to persist.
Discovering how to integrate LLMs into education will take time and effort that few people will want to expend. That may turn out to be a disservice to students.
It is true that learning to write and express your own thoughts cogently is an important learning objective. LLMs could undermine some of that learning process if students outsource their thinking and writing entirely to AI. However, it is likely that AI-based writing assistants will be widely used, and students will benefit from knowing how to operate them and how to discern the limits of the technology. Forbidding them may result in students missing out on obtaining skills that will be important after they graduate. There are no easy answers here.
The Furman student’s assignment was an interesting element of this story. Hume’s paradox of horror concerns the seeming inconsistency of people enjoying horror stories and ideas that evoke disgust or fear. If these ideas posed an immediate risk, they would typically not be viewed as entertainment. Maybe the idea of cheating created an adrenaline rush for the student, until the fear manifested as the reality of a failed class.