On November 2 the government launched the malta.ai National Artificial Intelligence (AI) strategy. As a group of experts in AI and technology policy and in ethics, we applaud the government for embarking on this process.
Given the various ways that AI is already affecting our daily lives, and the speed with which these technologies are developing, it is essential to invest in developing a policy framework that will maximise the opportunities and mitigate the risks of AI.
However, we have some concerns about both the general thrust of the proposed strategy, and some details that have emerged.
Firstly, although the vision document focuses on the economic opportunities for Malta, the word ‘ethics’ is only mentioned twice. Any astute strategy will enable the correct environment for the public and private sectors to capitalise on the opportunities these technologies present.
However, it must also offer policy solutions to the major recognised risks of AI. These include, for example, discussions on the ethical collection and use of Big Data, the issue of algorithmic transparency (the ‘black box’ problem), algorithmic bias and the ‘right to an explanation’ when being subjected to algorithmic decision-making, among others.
In this light, we find the task group’s fascination with Artificial General Intelligence (AGI – that is, human-level intelligence) puzzling. Given the very real challenges that current Artificial Narrow Intelligence technologies (ANI – that is, AI that focuses on one very specific task) pose to policy makers, technologists, and each and every one of us, this focus on a technology that many experts agree may never be created at all seems to be misguided.
The Maltese public deserves policy-making that is designed to tackle pertinent problems and that offers sensible, socially-beneficial solutions, while being forward-thinking and open to future developments in the field.
The need at the moment is to support the responsible application of ANI in all aspects of society – industrial, commercial, research and development, education and government – rather than addressing a speculative future that will only be relevant if AGI is ever achieved.
We are also concerned about the current makeup of the task force. Although its expertise in the technology itself, and in its application to various business models, is not in question, there appear to be no experts in AI ethics, AI policy, AI governance or AI regulation on the task force, nor anyone with experience of actually developing and implementing governance frameworks. There is also only one female member of the task force, and no international members or members from ethnic minorities.
Given how important it is that all sectors of society are represented in the AI development process in order to ensure democratic use of AI – from data collection through to algorithm deployment – a task force that is non-representative of contemporary Maltese society is highly disappointing and frankly unacceptable. We urge the chair of the task force, Wayne Grixti, to rectify this situation immediately by bringing a more diverse membership on board.
The lack of expertise and experience in AI ethics may explain some of the questionable choices that have been made by the task force thus far, such as appearing with ‘Sophia’, a speech recognition-enabled animatronic chatbot. It (not she) is nothing but a human-like robot. Prof. Noel Sharkey, Emeritus Professor of AI and Robotics at the University of Sheffield and co-director of the Foundation for Responsible Robotics, described using Sophia as a representative example of AI as “very, very dangerous” and said that “[starting] out from the perspective that Sophia can ‘understand’ anything [is] wasting time and money instead of building a sound basis.”
Prof. Virginia Dignum, Professor of Social and Ethical Artificial Intelligence at Umeå University, Sweden, described the aims of malta.ai as “definitely not responsible”.
Prof. Joanna Bryson, Associate Professor in Artificial Intelligence at the University of Bath, described the awarding of the Saudi Arabian citizenship to the same robotic Sophia as “obviously bulls**t,” and asks “What is this about? It’s about having a supposed equal you can turn on and off. How does it affect people if they think you can have a citizen that you can buy?” She continues: “Giving AI anything close to human rights would allow firms to pass off both legal and tax liability to these completely synthetic entities… Basically the entire legal notion of personhood breaks down.”
Similarly, the vision document does not pay enough attention to the importance of getting data ethics right. It is essential that the task force commits to developing a national AI strategy that makes the transparent, fair, and responsible handling of personal data an absolute priority. Conforming to GDPR is crucial, but not sufficient.
Furthermore, significant efforts need to be made to inform the public about how their personal data is being obtained and managed to develop AI tools. This plays a central role in ensuring public trust in, and acceptance of, these technologies. As Facebook recently discovered with its massive share price fall following the Cambridge Analytica revelations in the UK, US and elsewhere, the public do not take kindly to being misled about how their personal data is used.
We also urge the task force to seriously consider the undoubted effects that AI will have on labour markets. Estimates of the number of jobs at risk of displacement by AI-enabled automation over the next decade vary from one in eight to one in three jobs worldwide. New jobs will also be created, as has been the case with other technological revolutions. Although experts disagree on what the net effect of these technologies will be, one thing that almost all economists agree on is that change is coming and that its scale and scope will be unprecedented.
It is essential for a mature, forward-looking society to support those most at risk of being negatively impacted by these technologies. This may include financial and employment support, such as universal basic income and job guarantee schemes, as well as psychological support. We urge the malta.ai task force to take the opportunity provided by the development of a national AI strategy to lay out clearly how such support will be provided to Maltese society.
In conclusion, we would like to reiterate Prof. Virginia Dignum’s plea: “We need a business culture where ethics and values are core by default: [where] all AI is #responsibleAI.” Rather than adding ‘ethics’ on at the end of the development of a national strategy as an afterthought, we urge the malta.ai task force to make AI ethics the central starting point from which all policy decisions follow.
We look forward to engaging with the malta.ai task force, and to contributing our combined expertise and experience to the development of this national AI strategy. Ultimately, we share the goal of the development of a world-leading policy framework that cements Malta’s reputation as an excellent place to develop cutting-edge technology, while safeguarding the centrality of the human person and ensuring that these technologies are beneficial for all.
We are also keen to hear from everyone who is interested in this issue, and would encourage readers to engage with this public letter by tagging their feedback with #AIEthicsMT on social media. This discussion is too important for it to be restricted to a select group of people; it is only through genuine inclusion that we can ensure that the opportunities of these exciting new technologies are maximised, while the risks are minimised.
* Christopher Bugeja, MSc, Science Communication Consultant;
Dr John Paul Cauchi, MD, MSc, PhD student, Queensland University of Technology;
Brian Delicata, MEnt, Technology Consultant;
Dr Nadia Delicata, PhD, ethicist and media ecologist, University of Malta;
Dr Matthew Fenech, MD, PhD, consultant in AI policy, London;
Raisa Galea, MSc, editor with interest in techno-politics, Isles of the Left;
Matthew Pulis, MSc, computer scientist and MA Digital Theology candidate, CODEC Research Centre, Durham University.
This is a Times of Malta print opinion piece