In 1965, at the age of 23, Daniel Dennett set out on what he described as a “wildly ambitious” task for his Doctor of Philosophy at the University of Oxford: to show that anything a mind can do, a machine can do.

That young man went on to become an influential philosopher and cognitive scientist, who, right up to the time he died last April at the age of 82, remained as passionate about the nature of human consciousness as he had been as a young student setting out to challenge long-held theories.

His curious mind never stopped questioning and, before he died, he was vocal about the grave risks posed by artificial intelligence (AI), warning that it could erode trust and, with it, the fundamental fabric of society.

To this day, Dennett’s work continues to influence discussions on consciousness, evolution and AI, and I have always admired his thought-provoking and unconventional views.

He believed that consciousness is a product of brain activity and does not involve any non-physical or supernatural elements. This perspective suggests that the processes underlying human consciousness could, in theory, be simulated by a computer programme.

Dennett was a staunch supporter of the computational theory of mind – that human thinking works like a computer – arguing that our cognitive processes, such as perception and decision-making, were similar to computations performed by a computer.

He argued that mental states and processes could be understood as information processing. While Dennett did not explicitly claim that all aspects of human consciousness are Turing-computable in every detail, he believed the core mechanisms of consciousness could be simulated by a sufficiently powerful computer.

Dennett, who helped shape the field of AI research, acknowledged the complexity and depth of human thinking but maintained these were ultimately physical processes that could be replicated by a computer.

He believed we can predict and understand human behaviour by attributing beliefs, desires and rationality to agents, including machines. This idea aligned with his view that human-like thinking could be simulated with computer programmes.

In essence, Dennett saw the mind working like a sophisticated information-processing machine; one that could potentially be replicated through advanced computer programmes, even if we do not yet have a complete understanding of how it works.

Before ChatGPT was unleashed on an unsuspecting and, later, concerned world in November 2022, the debate between Dennett and his critics was largely theoretical. But the advent of models like ChatGPT and the rapid development of large language models (LLMs) significantly influenced this discourse.


Dennett’s ideas about the mind being a computational system are supported by the emergence of LLMs, which display complex, human-like language abilities. This suggests sophisticated behaviours can arise from intricate computational processes, aligning with Dennett’s views.

However, critics argue these models lack genuine understanding or consciousness, emphasising the difference between simulation and true cognition. The problem of subjective experience remains unresolved and current LLMs do not possess intentionality, beliefs or desires, challenging the idea they have actual mental states.

While LLMs advance the conversation about replicating aspects of human cognition and support some of Dennett’s views, they also highlight the ongoing challenges in fully understanding and replicating human consciousness and understanding.

The debate continues, with LLMs serving as both a tool for exploration and a point of contention in discussions about intelligence and consciousness.

In the meantime, LLMs are becoming more complex and capable with each generation, raising the question of how long it will be before they achieve human-level intelligence.

This question touches on the core of Dennett’s optimism about the potential of computer systems to eventually replicate human cognition.

If LLMs continue to develop at their current pace, they may soon exhibit even more sophisticated behaviours, pushing the boundaries of what we consider to be intelligent and conscious machines.

ChatGPT 5, which OpenAI CEO Sam Altman claimed would be a “significant leap forward”, is currently being trained on a graphics processing unit (GPU) supercomputer and is expected to be released later this year.

Dennett’s contributions to the philosophy of mind and cognitive science, which started 59 years ago, have left an indelible mark on our field. His ideas continue to influence contemporary debates, especially in the context of AI advancements.

As we reflect on Dennett’s legacy, the ongoing development of LLMs serves as both a testament to his vision and a reminder of the complex questions that remain unanswered.

The journey to fully understanding and replicating human intelligence and consciousness is far from over, but Dennett’s work provides a crucial foundation for this quest.

Prof. John Abela is a resident academic in the Faculty of ICT at the University of Malta. His areas of specialisation are AI, machine learning and large language models.
