On 1 and 2 November 2023, the United Kingdom hosted the world’s first AI Safety Summit. Representatives from 28 nations, including the USA, China, France and Japan, convened at Bletchley Park to discuss the implications of the rapid advancement of highly capable general-purpose AI models, known as ‘Frontier AI’.

The Summit welcomed other key stakeholders, including leading AI companies, experts and emissaries from prominent institutions, not least the Alan Turing Institute, the Organisation for Economic Co-operation and Development (OECD) and the European Union.

The two-day summit aimed to kickstart an international debate on the regulation of AI – acknowledging that, whilst its disruptive potential could yield significant benefits across most sectors, it could also bear catastrophic consequences if left unmonitored. The intensive discussions held during the Summit culminated in the so-called ‘Bletchley Declaration’, which was unanimously endorsed by the 28 participating nations. What follows is a brief summary of its contents.

The Bletchley Declaration

The declaration opens with two key acknowledgements. Firstly, it places strong emphasis on the significant global opportunities and challenges presented by AI. Secondly, it recognises that AI is no longer a futuristic concept, as it is already being applied in various aspects of daily life. This reality urgently calls for AI to be designed, developed, and used in a safe, human-centric, trustworthy, and responsible manner. The statement elaborates on the dual nature of AI – offering transformative opportunities, but also posing major risks relating to human rights, fairness, transparency, safety, accountability, ethics, and bias mitigation. A sharp focus is directed towards the safety risks which highly capable AI models entail. It explains how even the use of models at the cutting edge of AI development could lead to unforeseen consequences, particularly in sensitive domains such as cybersecurity and biotechnology.

The declaration places paramount importance on international cooperation in addressing such risks effectively. To this end, it calls for collaboration across the whole spectrum of nations, international organisations, the private sector, civil society organisations (CSOs) and academia. It also foresees the need for an international network of scientific research on frontier AI safety. Moreover, it stresses that this collaboration must be inclusive, in order to bridge the digital divide and ensure that developing countries can also reap the benefits of AI in a safe and informed manner.

Taking all this into account, the agenda outlined for addressing frontier AI risks is, in the main, two-fold. First, a shared scientific and evidence-based understanding of these safety risks must be identified, built up, and maintained as AI’s capabilities continue to increase. Building on this, countries are then urged to collaborate in the development of common policies, fully cognisant that approaches may differ between nations in accordance with their respective legal frameworks. In formulating these policies, the declaration urges nations to uphold transparency and to develop evaluation metrics, safety testing tools, and public sector capability.

The declaration concludes on a note of encouragement. It underscores the uniquely positive potential of AI, whilst at the same time committing to ongoing global dialogue, research, and cooperation to harness AI responsibly, thereby ensuring its benefits for everyone. Finally, it commits participating states to reconvene in 2024 to assess progress in these efforts.

The way forward

Given its high-level participation, the Bletchley Declaration stands as a landmark in the history of AI regulation. It represents the first collective acknowledgement of the risks that Frontier AI poses and may very well act as the catalyst for collaborations and regulatory initiatives which aim to ensure that the development of AI happens within the parameters of the law. The AI Safety Summit is expected to become an annual event, as France has already committed to hosting its next edition.

Matthias Grech is an advocate at Ganado Advocates. He has a keen interest in law, technology, and the intersection between the two.
