Nations commit to AI safety with historic Bletchley Declaration

Amanda Khatri

Editorial Manager

Global leaders from six continents have agreed to work together to limit the risks and explore the opportunities of AI.


UK Prime Minister Rishi Sunak spearheaded the first-ever Artificial Intelligence (AI) Safety Summit at Bletchley Park in the UK on 1 and 2 November 2023, gathering governments, developers, and industry luminaries.


Twenty-eight countries and the European Union (EU), including the United States and China, committed to establishing a global framework for AI safety that allows individual jurisdictions the freedom to pursue their own regulatory approaches.


The Bletchley Declaration commits nations to ensuring the safe development and use of AI across a range of industries, including health, education, food, technology, security, science, clean energy, biodiversity, and climate.


High-risk sectors that deploy AI, such as financial services, healthcare, and technology (for example, AI-powered chatbots), will be mandated to meet specific technical standards to be drafted in the coming years.


Countries are to implement robust regulations and technical standards while adopting a “pro-innovation” attitude with “proportionate governance” and a “regulatory approach that maximises the benefits and takes into account the risks associated with AI.”


“Never before has AI been so democratised, with regard to its development and consumption,” said CUBE’s Head of AI and Product, Dr Yin Lu. “Never have datasets been so large, and compute so powerful. Never before have the applications of AI been so vast, permeating every aspect of our personal and professional lives.”


“This summit represents a promising step forward in the landscape of AI development. With the rise of foundation models, the question has shifted from ‘can we’ to ‘should we’. To answer it, we urgently need a global regulatory framework in place.”


What emerged from the first AI Safety Summit?

Leading thinkers in the space addressed AI’s borderless nature and how best to mitigate its risks. What emerged is a plan for a cross-border approach that consists of “common principles and codes of conduct”.


Governments will put guidance in place that proactively addresses AI risks through collaborative model testing. While “the high-level principles from the summit are sound”, achieving “true international collaboration is going to be extremely difficult”, adds Dr Yin Lu.


Tech figurehead and AI investor Elon Musk suggested starting with a “third-party referee” to prevent risks from developing, as “you’ve got to start with insight before you do oversight,” he told reporters.


Key takeaways from the summit

  • AI development should emphasise safety, human-centric design, trustworthiness, and responsibility.
  • AI’s potential for economic growth, human rights, and freedoms should be balanced with ensuring public trust and global inclusivity.
  • The safe development of AI should be ensured, along with opportunities for AI to be used for good.
  • Human rights should be protected through transparency, fairness, accountability, regulation, safety, sufficient oversight, and bias mitigation, while privacy and data protection risks are addressed.
  • Regulation should target misinformation and deceptive content.

The agenda for addressing AI risks on national and international fronts:

  • Continuously develop a shared, evidence-based, and scientific understanding of AI safety risks.
  • Build risk-based policies among the countries that attended the summit, fostering collaboration and acknowledging that approaches may vary based on unique circumstances.


The next AI safety summit will be held in South Korea in six months, followed by another in France six months after that.


Who will be the thought leader on AI regulation?


The world’s eyes have been focused on AI this past year, and the rapid increase in the use of generative AI has instigated a race to regulate among governments and regulators worldwide.


The EU is in the final stages of drafting its AI Act and over in the States, President Joe Biden recently announced an Executive Order to ensure safe, secure, and trustworthy AI, setting standards for AI safety.


China has introduced regulatory frameworks that address the risks of advanced AI algorithms and generative AI. China’s vice minister of science and technology, Wu Zhaohui, said that Beijing is interested in building an international “governance framework” and “countries regardless of their size and scale have equal rights to develop and use AI.”


Despite hosting the summit, the UK may have missed an opportunity to announce its own specific approach, experts said.


“Considering the international context where the EU, US, and China, amongst others, are already implementing regulations to mitigate algorithmic risks, it’s imperative for the UK to follow suit,” said Dr Ana Valdivia, lecturer in AI, Government & Policy at the Oxford Internet Institute, University of Oxford.

“The UK Prime Minister should seriously consider initiating AI regulation within the UK; the time might not be too early but potentially too late.”


CUBE comment

The AI Safety Summit has cemented global regulatory efforts towards governing AI, and it won’t be long before compliance officers are handed another regulatory framework to embed in their organisation’s policies and controls.


Ever since the advent of generative AI tools such as ChatGPT and DALL·E, governance of AI safety has been much needed to address risks and protect consumers from security threats, data breaches, and other malicious uses of AI.


The AI regulatory drumbeat is only predicted to quicken. To stay ahead of the curve and ensure your firm doesn’t miss a beat, leverage an AI-powered regulatory solution to master AI regulatory compliance.


Using machine learning and natural language processing, CUBE captures and classifies all AI-related regulatory content, in every jurisdiction and every language, and maps it automatically to the firm’s compliance framework. Find out more by getting in touch below.


To ensure your firm complies with evolving AI regulations, speak to CUBE.