The recent Paris AI Summit brought together key leaders in the global governance space, with much discussion of the rapid evolution of AI innovation and its impact on global economies. Making headlines was the somewhat surprising decision by the UK and the US not to sign the Paris AI Declaration, which set out to establish a unified global framework to ensure transparency, ethical AI use, and alignment on human rights. So far Canada, India, China, and France have signed the Declaration, with others signalling intent to do so. So why have the UK and the US refused to sign, and what does it mean for privacy teams?

Opting for economic and national sovereignty over global regulatory alignment 

Although two of the leading global economies rejected the Paris AI Declaration, their reasons reflect fundamentally different approaches to AI governance. Let’s explore the current approach adopted by each:

  • The UK has long favoured adaptable regulation that aligns with national priorities. At the summit, UK representatives emphasised that the Paris AI Declaration lacks specific enforcement mechanisms and may conflict with domestic policies and priorities. Since Brexit, the UK has presented its regulatory approach as more adaptable, fostering “agile governance.” This strategy reflects the UK’s tradition of gradualism in technology governance.

The UK’s regulatory model relies more on industry self-regulation than on strict government intervention. Initiatives like the AI Safety Summit aim to promote self-regulation within the industry and strengthen ethical assessments. The UK prefers to set sector-specific regulatory rules based on the needs of different industries, buttressed by overarching laws such as the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, rather than applying a one-size-fits-all regulatory regime.

  • The US, on the other hand, places more emphasis on market freedom and innovation. US representatives’ remarks at the summit reflected concerns that international regulation could hinder the global leadership of US-based tech companies and stifle innovation. The US tends to prioritise self-regulation by tech companies, with government intervention occurring only when necessary. For example, the US encourages innovation through the American AI Initiative, relying on tech companies like Google, Microsoft, and OpenAI to set their own safety standards rather than implementing mandatory regulatory measures. This “reactive intervention” approach contrasts sharply with Europe’s “precautionary” model.

Opting not to sign the Declaration at this stage reflects an attempt to balance economic competitiveness and national security. On one hand, there is the pressure of international economic competition, especially in innovative technologies like AI, where overly strict global governance could limit domestic companies’ room to innovate. On the other hand, national security concerns cannot be overlooked: the use of AI in areas affecting national sovereignty and information security has led the UK and the US to adopt a more cautious approach.

Impact on global AI governance and business operations 

The stances of the UK and the US highlight the challenges of creating a unified global AI governance framework. Regulatory fragmentation could complicate compliance for businesses operating across borders. As AI becomes increasingly integrated into day-to-day operations, the key challenge for businesses and privacy teams will be navigating these different regulatory environments while ensuring compliance.  

As AI systems increasingly process personal data, privacy teams must ensure that data handling remains transparent, secure, and compliant with diverse regulations. Businesses must stay agile, updating privacy protocols as AI technology evolves and more countries develop their own regulatory frameworks. This means privacy teams will play a critical role in maintaining compliance, mitigating new risks, and addressing privacy concerns in AI-driven processes. 

What’s next for AI governance? 

Businesses must stay proactive in adapting to evolving AI regulation, and privacy teams will play a key role in guiding organisations through this complex landscape.

HewardMills’ AI governance experts can help manage the complexities of emerging AI laws and compliance. To discuss this topic or anything else data protection and privacy-related, please contact us at dpo@hewardmills.com. 
