The ability of automated vehicles (AVs) to interact with and impact the physical world without the need for human intervention, applying artificial intelligence (AI) technology to sense, learn from and adapt to their surroundings, gives rise to a host of intriguing questions on, among others, the legal front.

The notorious ‘trolley problem’, previously just a philosophical riddle of the mind, has found itself at the forefront of theoretical, technical and even legal discussions surrounding the introduction of AVs into society.

Essentially, the trolley problem is an ethical thought experiment which contemplates a bystander being faced with a perplexing choice: save five people from being hit by a runaway trolley by personally intervening to divert it to a different track, on which it will hit and kill one person, or refrain from intervening?

In the context of AVs, how would an AI-driven vehicle, unable to brake in time and bereft of the ability to make the type of moral assessment a human may make instantly, determine whether to crash into the vehicle ahead of it and endanger that vehicle’s occupants, or divert its trajectory and endanger its own occupant?

While some argue that there can be no such thing as the trolley problem in the operation of an automated car, since a properly functioning AV should be capable of detecting an impending collision and reacting in time to avoid it, the reality is that automation can never totally eliminate the possibility of an accident.

A vehicle − any vehicle − moving at a certain speed, even with the most refined software and hardware systems, will require some amount of time and distance to come to a halt from the moment the brakes are engaged. Furthermore, AVs are not immune to unpredictable behaviour on the roads.
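To put rough numbers on this (an illustrative calculation, not a figure taken from the project): a car travelling at 50 km/h covers roughly 14 metres every second. Even assuming firm braking at about 7 m/s² and no detection or reaction delay whatsoever, the basic relation, stopping distance = speed² ÷ (2 × deceleration), gives around 14 metres and some two seconds to come to a halt: space and time during which a hazard appearing immediately ahead simply cannot be avoided.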

Who, then, is to be held responsible when an AV – an inanimate object which is not (at least not as yet) a legal person – causes damage to a victim’s person or property? And against whom is that victim (or their heirs) expected to seek redress? Here, a responsibility gap arises.

Legislators are, therefore, faced with two options: (1) retaining existing legislation and leaving the assignment of responsibility in the hands of the courts; or (2) actively and pre-emptively preparing for the mass deployment of AVs through the development of ad hoc legislation and the reconsideration of existing regulation.

The same can be said with respect to the multiple other areas of law relevant to AVs and their deployment, including, for instance, insurance, data protection, privacy, cybersecurity, intellectual property and, of course, traffic regulation.

The adequacy of Malta’s existing legal framework in this respect, and whether − and if so, how − legislators should respond, constitute an integral part of the study carried out by legal researchers supporting Project MISAM (Malta’s Introduction of Shared Autonomous Mobility) and will form the subject of the next article in this series.

This article is the third in a series of articles relating to Project MISAM and the deployment of shared autonomous vehicles in Malta. For more information, contact Odette Lewis at odette.lewis@um.edu.mt.

Project MISAM (REP-2020-017) is financed by the Malta Council for Science and Technology, for and on behalf of the Foundation for Science and Technology, through the FUSION: R&I Research Excellence Programme.
