UK Prime Minister Rishi Sunak recently organised a summit on artificial intelligence (AI), inviting global leaders, tech executives, academics and civil society figures to, fittingly enough, Bletchley Park in Milton Keynes, the base for codebreakers during World War II. Those attending included US Vice President Kamala Harris, European Commission President Ursula von der Leyen and Elon Musk of Tesla fame.
It was quite an interesting gathering. The summit ended with the signing of a declaration recognising the risks associated with the development of AI. Another summit is planned for 2024, meaning that the topic will not disappear from the global agenda anytime soon.
We need to appreciate that AI is not something new; we have been making use of it for some time. Think of all the adverts targeted specifically at you on social media. That targeting is the result of AI.
However, the recent progress made in this area is starting to scare people: there is the fear that human beings will become subject to machines, and the fear that many jobs will simply disappear. Because people are afraid of AI, governments feel the need to step in and regulate it.
Musk's statement that AI is “one of the biggest threats to humanity” caught the headlines. One wonders whether it was motivated by genuine concern or simply by the fear of being left out. Whether AI is in fact a threat to humanity remains a contentious question, and the summit did not offer an answer. There does, however, appear to be consensus that, in the short term, AI can be misused for economic and political purposes.
Musk also spoke about jobs. He went as far as saying that AI will be able to do any job we can conceive. In an effort to counterbalance this possibility, technology companies have agreed to let governments vet their AI tools.
We need to keep an important consideration in mind. Irrespective of progress in machine learning, there is still no artificial intelligence capable of experiencing human sentiments. A machine may know the words to express these feelings, but that does not mean it is experiencing the emotions.
Moreover, the human brain is capable of doing certain things which cannot be defined and, as a result, cannot be programmed on a computer. This means that AI can never faithfully simulate the human brain. This is not my claim, but that of scientists who were among the founders of AI.
Perhaps the key element to emerge from the AI Safety Summit in the UK is that we should not be seeking to control and regulate AI as such, but rather the companies that develop it. It is companies that may engage in illicit activities, not AI itself.
On the other hand, the history of mankind has shown us that technology, in its broadest sense, has always rendered some jobs obsolete while creating new ones. The job of training pigeons to deliver messages has long since disappeared. This needs to be appreciated in this country as well. One way of moving our economy up the value chain is to make sure we maximise the use of technology.
The debate about AI cannot stop. It needs to remain on the agenda of public policy as much as climate change does. Neither a doomsday approach nor a laissez-faire approach will help that debate.
PS: By the way, AI can also stand for ‘astonishing incompetence’. That can also be very dangerous!