Once again, Artificial Intelligence (AI) has become a trendy term that brings either trepidation or inspiration. At one end of the scale, people fear that the advancement of AI will result in machines taking over our lives. On the other side, people believe that AI is the answer to everything.

The first step is to assess whether AI today can take over humanity as we know it. Artificial General Intelligence is still far from being achieved – having an AI system that is able to solve all types of human-level tasks with equal proficiency is a very hard problem. What we currently have is Artificial Narrow Intelligence – systems designed to solve tasks in a narrow domain.

Historically, AI systems were developed through a set of well-specified rules. The system, given data to be processed, would apply the rules and provide answers accordingly. Nowadays, with the advent of Machine Learning, we feed the data and the optimal answers to an algorithm, and it extrapolates patterns and rules that connect the dots between the data and the answers.
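The contrast between the two approaches can be shown with a toy sketch in Python. The spam-filtering scenario, the function names, and the example messages below are all hypothetical illustrations, not taken from the article: a hand-written rule encodes the programmer's knowledge directly, while the "learned" version extracts a rule from labelled examples.

```python
# Classic, rule-based AI: the programmer writes the decision logic by hand.
def is_spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# Machine-learning style (a deliberately simplistic sketch): instead of
# hand-coding the rule, derive it from example data paired with answers.
def learn_spam_words(examples):
    """Return the words that appear only in spam examples."""
    spam_words, ham_words = set(), set()
    for text, label in examples:
        words = set(text.lower().split())
        (spam_words if label == "spam" else ham_words).update(words)
    return spam_words - ham_words  # words seen in spam but never in ham

# Hypothetical training data: messages together with the "optimal answers".
data = [
    ("claim your free prize now", "spam"),
    ("free prize inside", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("lunch at noon?", "ham"),
]
learned = learn_spam_words(data)

def is_spam_learned(message: str) -> bool:
    # The learned rule: flag a message if it contains any spam-only word.
    return any(w in learned for w in message.lower().split())
```

Note that the learned rule here is at least inspectable; real Machine Learning models encode their "rules" in thousands or millions of numeric parameters, which is precisely the opacity problem discussed next.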

One of the main problems with Machine Learning is that we often do not know how those dots were connected – the algorithms do not output explicit rules that can later be used to trace the decision-making process. Another problem is that of bias. The algorithm learns from the data provided, so biases, introduced knowingly or unknowingly, can end up as part of the decision-making mechanism.

The risk of AI is not that it can outperform humans. Having software that outperforms humans has resulted in great achievements for humankind, from landing man on the moon to detecting tumours with greater precision. The main risk that we face is more ethical in nature.

A case in point is the Facebook/Cambridge Analytica scandal and other data breaches. Prior to the scandal, many people were concerned about the data collection done by Facebook. However, the severity of it only hit home when it became known that the data was used without users’ knowledge, speeding up the introduction of the GDPR framework in the EU.

Regulating in advance is challenging. At best, we can make realistic short-term predictions of how far and how fast AI will advance. It is difficult to fully understand the repercussions that future AI systems may cause. For instance, we already know that we need to regulate self-driving cars – who is responsible in the case of an accident involving a self-driving car?

A second difficulty is that large corporations are the main drivers of AI advancements, and therefore much of the AI component in their systems remains concealed and unknown to the user. Returning to the previous example, in deciding the level of responsibility of a self-driving car, we would need to understand what led to the accident and have access to the car’s AI components.

It is clear that regulating at a local level is not sufficient to protect users. AI is a global phenomenon and effort is needed at a global level to ensure that the future of Artificial Intelligence can be of true benefit to society as a whole, while ensuring that our human rights are preserved and advanced further than today.

Dr Claudia Borg is a lecturer with the Department of Artificial Intelligence at the University of Malta.

Sound bites

• Current research by MIT is looking at what type of moral choices autonomous driverless cars might have to make in future. Through the experiment ‘Moral Machine’ (http://moralmachine.mit.edu/) researchers hope to understand the moral differences between different groups of people.

https://www.sciencedaily.com/releases/2018/10/181024131501.htm

• IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. Their designs draw on concepts from the human brain and significantly outperform conventional computers in comparative studies.

https://www.sciencedaily.com/releases/2018/10/181003162715.htm

For more soundbites, listen to Radio Mocha on Mondays at 7pm on Radju Malta and Thursdays at 4pm on Radju Malta 2. https://www.fb.com/RadioMochaMalta.

Did you know?

• We are born with a fear of falling and a fear of loud noises, which are encoded in our DNA as part of our survival mechanism.

• Archaeologists have found evidence of man-made glass that dates back to 4000 BC; this took the form of glazes used for coating stone beads.

• Land snails have two sets of tentacles that stick out; the longer set carries the snail’s eyes, so the snail can move its tentacles around to get the best view. Water snails, on the other hand, have only one pair of tentacles, with the eyes at the tentacles’ base.

For more trivia see: www.um.edu.mt/think
