The Ethics of OpenAI/ChatGPT


The OpenAI Playground and ChatGPT use GPT-3.5, an AI language model capable of routinely producing text that appears human-written and that would meet typical benchmarks for competence in many fields. University policy responses currently address only the capabilities these tools have at present, but text-generation models will keep improving, resulting in an arms race that educators cannot win. A further concern for many educators is that students with greater familiarity with computers and the Internet may be better able to exploit these tools by formulating more effective generative prompts, which would further exacerbate the "digital divide" between historically advantaged students and others. While some universities are responding by increasing the number of assignments written in class or assessed through oral examination, these potential solutions cannot be implemented in large classes, such as those with enrolments of 900+ students that are common at South African universities. The range and severity of the possible consequences of OpenAI (and related tools) for teaching, learning and research are significant enough to merit reflection and response at the highest levels of decision-making, and this paper offers reflections on possible responses to this challenge.

Author Information
Jacques Rousseau, University of Cape Town, South Africa

Paper Information
Conference: ECE2023
Stream: Educational policy

This paper is part of the ECE2023 Conference Proceedings.

To cite this article:
Rousseau, J. (2023). The Ethics of OpenAI/ChatGPT. The European Conference on Education 2023: Official Conference Proceedings. ISSN: 2188-1162.