In the old days, a writing machine was always human. It was Isaac Asimov, who wrote almost 500 books in his lifetime and who could bang away 800 words in 10 minutes on his typewriter. It was Georges Simenon, who could write a novel in nine days, and John Creasey, who wrote them in six (though Creasey then redrafted, whereas Simenon just took an extra three days to strike out adjectives and make sure the names in the last chapter matched those in the first).

They were ‘machines’ because of their speed. But their output had an irreducible human element.

Creasey redrafted, paying close attention to feedback he got from editorial readers, especially to improve the authenticity of his women characters.

No one could evoke atmosphere and smells like Simenon, or characters on the verge of tragic breakdown. He wrote fast because he could hardly bear the fictional world he had to inhabit.

As for Asimov, for all the stripped-down, functional character of his prose, he always wrote to please a particular audience. He loved playing the part of a ‘village explainer’ when presenting scientific ideas, and the loveable rascal when compiling jokes or limericks. His science fiction stories were experiments in ideas, but they were experiments that excited readers.

Those were the old days. Today, the writing machine is powered by artificial intelligence – essentially an enhanced chatbot – and can produce blog content or a student essay in minutes. Should we be concerned? And about what?

Those questions have hit the news, in Malta and the UK, because of the academic implications. What if students submit AI-generated essays, which can’t be caught by anti-plagiarism software?

I’ve fed ChatGPT – the chatbot making the headlines – some of the essay titles I give my students. Nothing I got would deserve more than a C+ – a boon for struggling students but a trap for ambitious ones.

The answers tap the resources of the internet and go for the lowest common denominator. They are too bland and generic. The essays are bullet points of arguments, listed one after the other (“on the one hand” and “on the other”). They stack up factors but don’t express relationships, let alone insights. They invariably conclude by saying the topic is complex and controversial.

You’ve come across this type of presentation before: in business PowerPoint presentations; in blog posts and online “listicles”; in talking points made by TV pundits or political candidates trying to sound “presidential”; and in international news agency reports.

I asked ChatGPT to write me an 850-word column on whether Malta is a mafia state (to which, by the way, my provisional answer is no). Here’s how it began:

“Malta has been making headlines in recent years for all the wrong reasons. The tiny Mediterranean island nation, known for its picturesque beaches and historical sites, has been plagued by a series of scandals that have called into question the integrity of its government and the rule of law.”

It concludes: “In conclusion, the question of whether Malta is a mafia state is a complex and controversial one. There is certainly evidence of corruption and organised crime in Malta and the fact that no one has yet been brought to justice for the murder of Daphne Caruana Galizia is deeply troubling. However, it’s also important to note that the government of Malta denies that the country is a mafia state and has taken steps to improve transparency and accountability.”

See what I mean by blandness? This is content that could have been dashed off by a tired reporter for a seven-minute cable news segment. It has little sense of context or nuance. It taps the received wisdom and spews it out using a rhetoric of balance.

Chatbots aren’t bicycles; they’re AI parrots feeding broad summaries of other people’s views – Ranier Fsadni

If it really were a column, you wouldn’t finish reading it. It would serve your purpose just fine, though, if all you need to generate is conventional summary – CVs, reports or instructions – with no insight or entertainment.

Steve Jobs famously called the computer “a bicycle for the mind” – meaning that it would enhance human thought rather than replace it. Currently, chatbots aren’t bicycles; they’re AI parrots feeding broad summaries of other people’s views.

That doesn’t mean they pose no new challenge. In the immediate term, there is the pressing matter of how to grade students fairly.

In the longer term, we are not in uncharted territory. The computer revolution in chess points the way.

To address cheating, the rules of conduct surrounding games have changed significantly. But computers have also revolutionised how players train and think.

Computers have made it easier for a talented player to emerge from areas cut off from centres of chess activity.

Rather than computers thinking like humans, we’ve seen humans think more like computers (chess aficionados talk of moves in terms of whether they look ‘human’).

It still takes talent to know how to use a computer well. It has become a skill to be learned.

Magnus Carlsen, the world no. 1, has developed a playing style based on taking advantage of players who over-rely on computer preparation. He uses computers to exploit human psychology and physical stamina.

Adapting this experience to writing contexts, we’d see AI used to help students overcome the paralysis of facing a blank page, but also to learn how to distinguish insipid essays from insightful ones.

There would also be more time to focus on pre-essay forms of note-taking, which the most original thinkers – from Leonardo to Richard Feynman and Niklas Luhmann – credit for their insights.

Do that and we’d have gone back to the old ideals, when writing machines were human and computers were bicycles for the mind.

 
