After much anticipation, on Wednesday the European Commission published its proposal for a regulation aimed at harmonising the rules governing Artificial Intelligence throughout the European Union. The anticipation was heightened after a draft of the proposal was leaked a few days earlier.
The proposed regulation has drawn mixed reactions from across the globe, since its jurisdictional scope will affect companies producing or using AI systems even outside the EU if such systems, or their outputs, are to be used in the EU. This extraterritorial reach is not a new approach for the EU, which adopted a similar one with respect to data protection through the GDPR.
The proposed regulation aims at instilling trust in the use of AI systems. It sets out to do this by adopting a risk-based approach with respect to the particular uses of AI systems. Depending on the risks associated with the intended use of an AI system, that AI system may require certification, or its production or use within the EU may even be prohibited. The Commission has categorised risk into four levels: unacceptable, high, limited and minimal. It is the first two that, naturally, take centre stage in the proposed regulation and bear the brunt of the regulatory obligations imposed through this framework.
In principle, barring and regulating AI systems that will, or may, harm individuals is just. Paradoxically, however, it is in areas such as health, education, finance and mobility, to mention just a few, that most efforts in technology and AI have been, and are being, made to increase levels of automation and arrive at a stage where AI is ultimately dependable and will consistently provide a level of output that may, within its limited use, outperform the human mind.
It is this aspect that has drawn the criticism of those who feel that the proposed regulation will stifle developmental freedom in these critical sectors and apply the brakes to the creation of highly dependable AI systems through machine learning and deep learning techniques, whereby the machine develops its own intelligence and provides precise, but also unexplainable, outputs. The criticism stems from the fact that, for “high risk” applications, the proposed regulation imposes the need to obtain regulatory approvals, which in turn requires embedding a number of safety measures within the AI system, including adequate human oversight (thus possibly moving away from full automation).
On the flip side, it is precisely the fear of depending on a fully automated (and possibly unexplainable) process for an output that could determine one’s life, health, livelihood, education, career or financial progression that needs to be allayed, and that the proposed regulation seeks to address. Hence the focus is on regulating defined “high risk” applications of AI systems, namely AI technology that constitutes a product, or a safety component of a product, requiring certification prior to being placed on the market (including medical devices, diagnostic software and vehicles), or that is used in critical infrastructure, educational and vocational training, employment, essential private and public services, law enforcement, migration, asylum and border control management, the administration of justice and democratic processes.
To anyone familiar with the Maltese regulatory framework for innovative technology applications, the system of regulation adopted by the proposed regulation does not sound new. Indeed, the Malta Digital Innovation Authority (MDIA) Act and the Innovative Technology Arrangement and Services (ITAS) Act have already implemented a system, albeit voluntary, of certifying innovative technology applications (originally aimed at distributed ledger technology (DLT) and smart contracts, but now being expanded to AI and other forms of innovative technology) through systems audits carried out by auditors who are licensed by the MDIA, selected by the applicant, and approved by the MDIA according to their competence.
The proposed regulation mandates a similar system of regulation, whereby independent professionals, regulated by member state national authorities, are to certify that a “high risk” AI system meets the regulatory requirements listed in the proposed regulation before the system is launched or used within the EU.
As the adage goes, behind every challenge lies an opportunity. The opportunity for Malta could well be that of offering the technology developer an efficient and user-friendly ecosystem that helps navigate the intricate regulatory web. In this regard, it should be noted that the proposed regulation does not exist in a vacuum, nor does it constitute the full set of regulatory obligations to be met by AI system developers or users. On the contrary, it sits within a wider universe of regulatory frameworks that come into play and have already been criticised as inadequate to meet the challenges and realities posed by AI, including the GDPR and the Product Liability Directive.
The fear is that this conundrum of regulation will drive AI developers out of the EU in search of unregulated territories within which to operate without the drain on energy and resources that regulation brings about.
Time will tell whether this fear is ill-founded. It is a fact that AI systems, especially those that can really make a difference in one’s life, need to be trusted in order to be put to use, and irrespective of how good the technology is, the AI developer must earn the trust of a target audience that will not take it on blind faith.
The ultimate goal should therefore be to strike a balance between building trust in technology, through regulation, and developmental freedom. The Maltese regulatory framework is already attuned to the frequency on which the proposed regulation is broadcasting. We now need to capture these signals and roll out a platform and network of initiatives, such as sandboxes, centres of regulatory intelligence and automated solutions, that bring regulators, institutions, educators, professionals and industry players together in offering an attractive and developer-friendly ecosystem that draws AI developers to our shores.
Paul Micallef Grimaud is a partner at Ganado Advocates and heads the Intellectual Property, Technology, Media and Privacy practice at the firm.