I recently attended a meeting for parents of primary schoolchildren in which the discussion veered, unsurprisingly, towards the effect of artificial intelligence (AI) on education. The speaker was not a doom-monger and, while acknowledging the challenges that ChatGPT brings to current pedagogy and means of assessment, suggested that education itself needs to adapt to the new technologies.

“Before ChatGPT, we used to teach students how to write. Now that ChatGPT can do this for them, we need to instead teach them how to ask ChatGPT questions to produce desirable writing,” they said.

I sympathise with some aspects of this position. Generative AI is here to stay and, as long as it is accessible to students, many will use it. Policing, while needed to maintain fairness in assessment, should not be the primary concern for educators. Education is not about the students on one side and the educators hunting for their misdemeanours on the other.

Still, I am troubled by the speaker’s words, particularly the claim that, since ChatGPT can write well, we should now teach students something else.

One underlying assumption is that ChatGPT always produces good writing – something that is debatable considering its penchant for inaccuracy. However, even if this issue – which is not insurmountable – were overcome, I find the suggestion that all that matters in writing is the finished product dishearteningly instrumentalist.

Writing is not just the replication of an already existing model to produce texts in expected forms. It also involves reading, engaging critically with material, developing a perspective on different topics, finding the best ways to write and reflecting on your own thoughts while writing about them. If taught well, writing not only produces texts but also changes the writers themselves in some way.

Writing might make writers more reflective human beings, give them an outlet for self-expression and help them better understand their social context and the world around them. It can also sharpen their discernment in evaluating and using the sources they encounter and this, in turn, can make them stronger critical thinkers.

To delegate the work of writing to AI that ‘can do it better’ is, therefore, not simply to adjust ourselves to new technologies but also to risk delegating modes of thinking that not only make us human but are essential for societal well-being. It is true that our thinking is almost always mediated by technology.

As the speaker at this event rightly pointed out: “No one would question the usefulness of calculators today.” But the fact that a machine can do something quicker than us does not mean we have to stop doing it if we care about what we would consequently forfeit.

There is another issue that is rarely mentioned. In focusing on the output of this text generator – how accurate or how human-like it is – or the input – how we ask it the right questions – we tend to forget the way it actually produces text. ChatGPT is built on a transformer neural network that strings words together on the basis of numerical probability: the likelihood that certain words typically follow one another.
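To make that basic principle concrete, here is a minimal sketch in Python. The words and probabilities are invented for illustration; ChatGPT’s actual model is a transformer with billions of parameters trained on enormous corpora.

# Toy sketch of next-word prediction (invented numbers, not ChatGPT's code).
# Hypothetical probabilities for the word that follows "The cat sat on the":
next_word_probs = {"mat": 0.60, "sofa": 0.25, "roof": 0.15}

# The model's basic move: rank candidate words by how likely they are to
# follow the text so far, and pick accordingly.
most_likely = max(next_word_probs, key=next_word_probs.get)
print(most_likely)  # prints "mat"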


In order to simulate unique, human-like responses, the output varies so that the most likely next word is not always chosen. One key difference from previous text generators, which have existed at least since Theo Lutz’s 1959 Stochastische Texte (Stochastic Texts), is that human beings have been helping ChatGPT learn what is and is not a desirable output: human trainers provide feedback on the quality of responses, and a reward model is developed that allows the transformer to establish which outputs are most likely to earn high rewards.
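A second sketch, again with invented words, numbers and function names, illustrates the two mechanisms described above: sampling so that the most likely word is not always chosen, and a stand-in “reward model” that scores candidate outputs, roughly as human feedback is used to steer the real system.

import math
import random

# Illustrative only: sampling with a "temperature" means the top-ranked
# word is likely, but not guaranteed, to be chosen.
def sample(probs, temperature=0.8):
    words = list(probs)
    weights = [math.exp(math.log(p) / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

print(sample({"mat": 0.60, "sofa": 0.25, "roof": 0.15}))

# A hypothetical reward model: a stand-in scoring rule that prefers the
# kinds of responses human trainers rated highly.
def reward_model(reply):
    return 1.0 if "helpful" in reply else 0.2

candidates = ["a helpful, accurate answer", "an evasive answer"]
print(max(candidates, key=reward_model))  # prints the higher-reward reply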

The implications are important. As the data scientist Omar Trejo writes: “It’s simply not acceptable to write AI off as a foolproof black box that outputs sage advice. In reality, AI can be as flawed as its creators, leading to negative outcomes in the real world for real people.” AI is subject to bias because it ultimately learns to produce text from the data fed to it, and that data might itself be biased.

But even if these issues are solved, we still have to consider what we would lose if we were to uncritically delegate writing to AI. For example, what would be the potential effects on political thought and political discourse? And what about ethics?

Do we want AI text generators to become our new philosophers? Do we want them to become our poets, our novelists and our teachers? Do we want them to determine what we read and make decisions about the importance of some words over others for us?

I am not a Luddite. I have been researching electronic literature for a decade and I am aware of the exciting potential of human-computer interactions.

I am also aware that AI has uses in industry and other contexts. But celebrating the productive value of AI technology without reflecting on its political, ethical and also cognitive or emotional implications is a mistake.

I hope that discussions about it in our education system will address these issues critically and seriously. Whether students might cheat by using ChatGPT is, in my view, far less important than how AI technology might make them different human beings.

Mario Aquilina is an academic at the University of Malta.
