The UK government has detailed its AI regulation plans, describing a “common sense” framework with no new legislation planned initially. In the EU, meanwhile, the highly ambitious AI Act could add around 5% in compliance costs for affected software developers.

By relying on the existing powers of sectoral regulators, the UK says it will enable innovation to flourish. In fact, the government’s 6 February 2024 AI regulation consultation response mentions “innovation” over 200 times. 

But while safeguards might initially be expensive, poorly implemented AI systems can cost much more. If regulation helps avoid discrimination, emotional harms, and human rights violations, the EU’s more robust approach could win out in the long term.

The UK: A ‘pro-innovation approach’ 

“The world is on the cusp of an extraordinary new era,” says UK Secretary of State for Science, Innovation and Technology Michelle Donelan in her foreword to a paper on the UK’s “common sense, pragmatic approach” to AI regulation. 

The government’s vision sees bodies such as the Information Commissioner’s Office (ICO), communications regulator Ofcom, and the Competition and Markets Authority (CMA) guided by a set of “cross-sectoral AI principles” rather than enforcing new legislation. 

But the UK isn’t advocating a “do nothing” approach. Among other measures, the government’s plans involve:

  • Making AI-related reforms to the UK General Data Protection Regulation (UK GDPR) via the Data Protection and Digital Information Bill (DPDIB);
  • Launching an AI Management Essentials scheme, setting a minimum good-practice standard for AI companies;
  • Developing a Code of Practice for cyber security in AI, based on guidelines issued by the National Cyber Security Centre (NCSC) in November 2023.

The government concedes that new law may be necessary to tackle the risks created by “highly capable general-purpose AI systems” (perhaps including OpenAI’s GPT model), particularly to help apportion liability among developers, distributors and users of such systems. 

But overall, the UK government’s approach to AI regulation, which it variously describes as “agile”, “dynamic”, and “pro-innovation”, looks very different to that emerging across the English Channel.

The EU: A ‘global standard’ for AI regulation 

The EU’s AI Act is nearing finalisation and has evolved in many ways since the European Commission’s initial proposal in April 2021. 

The structure of the law remains substantially unchanged in the latest version of the AI Act text. For example:  

  • Actors in the AI supply chain are grouped into roles such as “product manufacturer”, “provider”, “deployer”, “importer”, and “distributor”.
  • Types of AI systems and uses are classified according to risk. The law applies strict rules to “high-risk” AI systems, imposes lighter obligations on certain lower-risk systems, and prohibits some practices altogether. 
  • Certain AI systems must undergo a “conformity assessment” and bear a CE Mark before being made available in the EU. 

However, owing to developments in generative AI and the EU’s lengthy “trilogue” negotiation process, the AI Act’s details have changed quite significantly over the past three years. 

New rules require the registration of powerful “general purpose AI systems”, prohibitions have been extended in some areas and weakened in others, and various AI systems have been reclassified as “high-risk” in an attempt to satisfy the EU’s various parliamentary factions and member states. 

But despite the law’s stated goal of encouraging AI startups, the EU’s own impact assessment predicts significant compliance costs, with up to 15% of AI applications subject to rigorous “high-risk” requirements.

Another study, from the appliedAI Initiative, predicts that around 18% of AI systems would be “high-risk”, and that a further 40% of systems could also fall into this category.

Establishing your organisation’s AI approach 

While we wait for regulators to set out clear legal rules, individuals are already seeking reassurance about how organisations are using AI. 

Companies integrating AI into their products and processes are developing AI policies based on responsible principles, corporate ethics and compliance with existing legal obligations. 

HewardMills is carefully tracking the development of AI regulation worldwide. In the meantime, our team can help you leverage AI in a way that minimises risk, complies with data protection law, and reassures your employees and customers that you’re taking your responsibilities seriously. 

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at dpo@hewardmills.com.