‘Grace’ is a lifelike robot nurse, built with artificial intelligence to provide emotional care to patients during the pandemic and make them feel comfortable and at ease.

Artificial intelligence (AI), the self-learning technology that detects patterns in historical data, is pervading all walks of life, from healthcare to the financial services industry.

The High-Level Expert Group on AI, tasked by the European Commission to draft AI ethics guidelines, defined AI as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”.

In the world of finance, AI has evolved substantially over recent decades, and its uses range from performance-data monitoring and the assessment of creditworthiness and credit scoring to combatting cybercrime and money laundering. However, this rapidly expanding use carries a fair amount of risk, particularly in machine-learning applications, where data bias, statistical error or interference during the learning process can lead the AI to generate erroneous results.
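To see the data-bias risk in concrete terms, consider the following minimal sketch (not drawn from the article; all data, group names and figures are hypothetical). It shows how a credit-scoring model trained on historical lending decisions that were themselves biased against one group will reproduce that bias as if it were a genuine signal:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical lending data: past approvals depended on
    # repayment ability AND on group membership (the embedded bias).
    rng = np.random.default_rng(0)
    n = 10_000
    ability = rng.normal(size=n)          # true repayment ability
    group = rng.integers(0, 2, size=n)    # 0 = group A, 1 = group B
    approved = (ability - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

    # A model fitted to these labels learns the bias as if it were signal.
    X = np.column_stack([ability, group])
    model = LogisticRegression().fit(X, approved)

    # Two applicants with identical ability receive different credit
    # scores purely because of their group membership.
    print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])

Note that no statistical error has occurred here: the model has faithfully learned a pattern present in the historical data, which is precisely why such bias is difficult to detect without the governance and oversight measures discussed below.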

The paucity of AI regulation and the multiplicity of AI practices led the European Commission to focus on this technology in its Digital Finance Package, launched at the end of 2020 to ensure that the EU financial sector remains competitive while catering for digital financial resilience and consumer protection.

Last year, the Commission went a step further and issued a proposed regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (the Artificial Intelligence Act, also referred to here as the ‘proposal’). This proposal, which is not yet in force, would promote public safety and legal certainty, as well as facilitate investment and innovation in AI.

One of the approaches adopted in the Artificial Intelligence Act is to ban outright specific categories of AI systems, such as systems that apply subliminal techniques beyond a person’s consciousness or that exploit the vulnerabilities of specific groups. Other AI systems are classified as high-risk – and into this pigeonhole fall AI systems used to evaluate the creditworthiness of natural persons or establish their credit score.

The European Commission is proposing mandatory requirements applicable to providers, distributors, importers, operators and even users of AI systems, including (i) a risk-management system applied throughout the high-risk AI system’s lifecycle; and (ii) requirements on the training, validation and testing of data, data governance and management practices, technical documentation, record-keeping, transparency, human oversight, the reporting of serious incidents or malfunctioning, cybersecurity and conformity assessments. The introduction of a European Artificial Intelligence Board has also been proposed.

Towards the end of last year, the European Central Bank published its opinion welcoming the Artificial Intelligence Act. While noting the increased importance of AI-enabled innovation in the banking sector, given the cross-border nature of such technology, the supranational body held that the Artificial Intelligence Act should be without prejudice to the prudential regulatory framework to which credit institutions are subject.

The ECB acknowledged that, to ensure consistency, the proposal cross-refers to the obligations under the Capital Requirements Directive (2013/36/EU, as amended, the ‘CRD’), including risk-management and governance obligations. Yet the ECB sought clarification on internal governance and on outsourcing by banks that are users of high-risk AI systems.

Raising concerns as to its role under the new Artificial Intelligence Act, the ECB reiterated that its powers derive from article 127(6) of the Treaty on the Functioning of the European Union (TFEU) and the Single Supervisory Mechanism Regulation (EU) No 1024/2013 (SSM Regulation), instruments which confer on the ECB specific tasks concerning policies relating to the prudential supervision of credit institutions and other financial institutions.

Recital 80 of the proposal provides that “authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions”.

The bank held that ‘market surveillance’ under the Artificial Intelligence Act would also entail safeguarding the public interest of individuals (including their health and safety). In a nutshell, the ECB informed the Commission that it has no competence to regulate solutions like Grace the robot; it will only ensure the safety and soundness of credit institutions. To this effect, the bank suggested that (i) a relevant authority be designated for obligations relating to health and safety risks; and (ii) another AI authority be set up at Union level to ensure harmonisation.

In parallel, the ECB also recommended that the Artificial Intelligence Act be amended to mandate that, in relation to credit institutions evaluating the creditworthiness of natural persons and establishing their credit scores, an ex post assessment be carried out by the prudential supervisor as part of the supervisory review and evaluation process (SREP), in addition to the ex ante internal controls already listed in the proposal.

Interestingly, the Bank for International Settlements, in its newsletter on artificial intelligence and machine learning, raised concerns over the cyber-security and confidentiality risks, data governance challenges, risk-management issues, biases, inaccuracies and potentially unethical outcomes of AI systems, noting that “the committee believes that the rapid evolution and use of AI/ML by banks warrant more discussions on the supervisory implications”.

While the Artificial Intelligence Act has not yet been agreed upon in its final form and may change substantially before its adoption, it is safe to say that the financial sector is one in which the challenges relating to the use of AI need to be evaluated well, both before and when deploying such technological solutions, in view of the risks and individual rights at stake.

James Debono is an associate and Erika Gabarretta is an advocate at Ganado Advocates.
