Artificial Intelligence, Academic Misconduct, and the Borg: Why GPT-3 Text Generation in the Higher Education Classroom Is Becoming Scary

Abstract

We playfully explore the capacity of an artificial intelligence text generation engine called GPT-3 to produce credible academic texts. Departing from a concern raised by colleagues about the possibility of using GPT-3 to cheat in academia, particularly at the undergraduate level, we interact with the GPT-3 interface as nerdy novices to learn what it can produce. The outputs from the GPT-3 text generation engine are incredible, at times surprising, and often terrible. We point to ways in which GPT-3 might be used by students to produce written work and reasons why most instructors, most of the time, could see through what GPT-3 has produced (at least for now). In our experiments, we learn that GPT-3 can be a productive collaborator in paper design but wonder whether this is ethical. In short, while GPT-3 is fun and somewhat addictive to experiment with, we must pay attention to the potential ways that AI text generation may begin to appear in the anthropology classroom.

Citation (APA)

McIlwraith, T., Finnis, E., & Jones, S. (2023). Artificial Intelligence, Academic Misconduct, and the Borg: Why GPT-3 Text Generation in the Higher Education Classroom Is Becoming Scary. Anthropologica, 65(1). https://doi.org/10.18357/ANTHROPOLOGICA65120232166
