In the second episode of the “Ganado Meets Tech” podcast, Ganado Advocates’ IP/TMT partner Paul Micallef Grimaud met with Professor Alexiei Dingli to introduce the emerging technology of Artificial Intelligence (AI).
What is AI?
AI can be defined as the field of study that seeks to get machines to simulate intelligent actions performed by humans.
Although the definition found in the European Commission’s draft AI Regulation may give the impression of being wide, Dingli argues that it is actually restrictive, given that the two terms “artificial” and “intelligence”, taken on their own, are not limited to software.
The different levels of intelligence
There are different degrees of “intelligence”. The most basic is automation: the application of a set of commands and rules. Learning, on the other hand, is far more complex and exciting; in the field of AI it involves machines that take decisions autonomously and that can communicate not just with other programs but also with humans.
Other subfields of AI include natural language processing (where a computer processes and understands natural languages – take Siri, Alexa or Google Assistant as an example) and computer vision (where the AI is capable of understanding a picture or video).
Without necessarily being aware of it, we are fully immersed in AI, from systems that set the temperature of air conditioners and applications embedded in home appliances and cars, to email filtering and social media.
But there is no disputing the fact that machine learning is the “superstar of AI”, given its widespread use and its adaptability across all the other subfields. In this discipline, a machine is fed data from which it is trained to learn, in turn developing its own intelligence.
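The idea of a machine learning from data, rather than following hand-written rules, can be sketched in a few lines. The example below is purely illustrative and not from the podcast: a one-nearest-neighbour classifier whose “training” consists of memorising labelled examples, and which labels a new point according to the closest example it has seen.

```python
# A minimal sketch of machine learning: a 1-nearest-neighbour classifier.
# "Training" simply memorises labelled examples; prediction gives a new
# point the label of its closest training example.

def train(examples):
    """Store (features, label) pairs as the model."""
    return list(examples)

def predict(model, point):
    """Return the label of the nearest stored example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], point))[1]

# Toy data: (length in cm, weight in kg) labelled "cat" or "dog".
model = train([((25, 4), "cat"), ((30, 5), "cat"),
               ((60, 25), "dog"), ((70, 30), "dog")])

print(predict(model, (28, 5)))   # → cat
print(predict(model, (65, 28)))  # → dog
```

Real systems use statistical models trained on vastly more data, but the principle is the same: the behaviour is learned from examples instead of being programmed explicitly.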
Addressing distrust in AI
People hesitate to entrust risky activities entirely to machines. That said, we very often do so without knowing. Aeroplanes are a prime example: 90% of flights are flown by an AI agent, with most landings being fully automated. Yet it is the figure of the pilot sitting in the cockpit that elicits trust, not the machine.
Even though a machine may, within its programmed limitations, outperform a human being, the distrust is even more accentuated when it is difficult, if not impossible, to understand how the machine reached its conclusions. This is known as the “black box” phenomenon.
Technology is not flawless, and a number of instances have demonstrated the need to understand the machine’s “thought process” in moving from a defined input to a particular output. Results that appear accurate may in fact rest on an identifier that is not pertinent to the answer itself, or may be accurate only on the AI’s own terms and not for the problem being addressed in practice.
A machine may correctly distinguish between pictures of huskies and wolves, yet base its results solely on the surroundings (wild versus domestic) in each picture. Similarly, a medical program trained to decide whether patients with pneumonia should be sent home may, based on the data fed to it, conclude that all patients with both pneumonia and asthma should be discharged, failing to understand that such patients should not be sent home but rather directed to other wards for more intensive care.
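The husky/wolf pitfall can be reproduced with a deliberately naive toy learner (a hypothetical sketch, not the actual study). Because every training wolf happens to appear against snow, the background perfectly separates the labels, the learner latches onto it, and a husky photographed in snow is then classified as a wolf:

```python
# Toy training set: every wolf is photographed in snow, every husky
# indoors, so the background is spuriously correlated with the label.
# "ears" carries no signal at all: every animal has pointed ears.
training = [
    ({"ears": "pointed", "background": "snow"},    "wolf"),
    ({"ears": "pointed", "background": "snow"},    "wolf"),
    ({"ears": "pointed", "background": "indoors"}, "husky"),
    ({"ears": "pointed", "background": "indoors"}, "husky"),
]

def learn_rule(data):
    """Return the first feature whose values perfectly predict the label,
    together with the value -> label mapping it induces."""
    for feature in data[0][0]:
        mapping, consistent = {}, True
        for x, label in data:
            if mapping.setdefault(x[feature], label) != label:
                consistent = False
                break
        if consistent:
            return feature, mapping
    return None, {}

feature, mapping = learn_rule(training)
print(feature)  # → background (the only feature that separates the labels)

# A husky photographed in snow is now misclassified as a wolf.
husky_in_snow = {"ears": "pointed", "background": "snow"}
print(mapping[husky_in_snow[feature]])  # → wolf
```

The model is “accurate” on its training data while answering the wrong question, which is exactly why the reasoning behind an output matters as much as the output itself.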
The issue of bias
These examples show how pertinent the adage “garbage in, garbage out” is to AI and machine learning. It raises the concern that gender and racial bias in the data fed to a machine will be reproduced in the algorithms it learns, leading to unfair and unjust conclusions.
It is extremely hard, if not impossible, to eliminate bias because, in Dingli’s words, “data ultimately reflects our world and our world is biased”. This is demonstrated by a Microsoft chatbot experiment in which the bot was left to train on data that it freely captured over the internet. Within 15 hours it had turned into a “racist and Nazi sympathiser”.
The responsibility for taking the necessary precautions is therefore on the human programmers. This ties in with what the European Commission is advocating – the need to have “Trustworthy AI”.
With this aim in mind, the Commission’s draft AI Regulation classifies AI according to the risk its application poses to society, creating a form of traffic-light system: some applications are outright detrimental or too risky, and hence prohibited; others are risky and need to be appropriately controlled; while others still can be applied without much concern or need for regulatory intervention.
The second band, high-risk applications, covers those that may have the greatest positive impact on society if applied properly. They include applications that deal with a person’s health, wellbeing, finances, job and education. Here the Commission is imposing an obligation for such applications to be certified by approved and regulated certifiers, on the basis of criteria aimed at eliminating, or balancing out, the risks attached to the particular use of the AI.
AI and privacy
Finally, one cannot forget that the use of personal data in AI is also subject to the protection granted by the GDPR, the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union. Ultimately, privacy is a fundamental human right and, save for the exceptions contemplated in the law, no personal data should be used unless the data subject has freely and expressly consented to its use after being fully informed of how the data will be used, by whom and to what end.
Moreover, no decision that could impact, among other things, a person’s health, wellbeing, livelihood or career progression should be taken on the basis of automated processing of personal data without the data subject’s free, informed and express consent.
Looking to the future, Dingli believes that ubiquitous computing will become the norm and that traditional computers and mobile phones as we know them today will be replaced by augmented reality and applications built into spectacles or even contact lenses.
Most certainly, there is no stopping how technology, and in particular AI, will continue to shape the world we live in and our ways of life.