The EU Council presidency and the European Parliament have provisionally agreed on a unified AI regulation for Europe. The proposed rules aim to ensure that AI systems placed on the EU market are safe and respect fundamental rights and EU values.

The deal refines the definition of an AI system to clearly differentiate it from simpler software, aligning it with the approach proposed by the Organisation for Economic Co-operation and Development (OECD).

Additionally, the provisional agreement specifies that the regulation will not cover areas outside the scope of EU law, such as national security, military and defence purposes, nor will it affect individuals using AI for purely non-professional purposes.

While the text has yet to be finalised, the provisional agreement:  

  • Provides rules for high-impact general-purpose AI models that could pose systemic risk in the future, as well as for high-risk AI systems. 
  • Establishes a revised system of governance, with some enforcement powers at EU level. 
  • Extends the initial list of prohibited practices, while retaining the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to specific safeguards. 
  • Requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.

Classification of AI systems and prohibited AI practices  

The provisional agreement establishes a multi-level protection framework for AI systems in the EU, classifying them based on risk potential.  

High-risk systems are subject to stringent requirements, while lower-risk systems face only light transparency obligations. The agreement simplifies the requirements for high-risk AI systems and clearly defines the responsibilities of the different actors in the AI value chain.

The provisional agreement would prohibit certain AI practices, such as cognitive behavioural manipulation, untargeted scraping of facial images, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data, and certain forms of predictive policing.

General-purpose AI systems and foundation models

The agreement introduces measures for general-purpose AI (GPAI) systems and rules for foundation models, which must meet specific transparency obligations before they are placed on the market. "High-impact" foundation models are subject to a more stringent regime.

Penalties

The agreement introduces a tiered penalty regime, with fines set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher, as illustrated in the sketch after this list:

  • Up to €35 million or 7% of global annual turnover for violations involving banned AI applications. 
  • Up to €15 million or 3% of global annual turnover for violations of the act's obligations. 
  • Up to €7.5 million or 1.5% of global annual turnover for supplying incorrect information.
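To make the tiers concrete, here is a minimal illustrative sketch in Python. It assumes, in line with the Council's description of the deal, that the applicable cap is the higher of the fixed amount and the turnover percentage; the tier labels and company turnover figures are hypothetical, for illustration only.

```python
# Illustrative sketch of the tiered fine caps under the provisional agreement.
# Assumption (per the Council's description of the deal): the applicable cap
# is the HIGHER of the fixed amount and the turnover share.
# Tier labels and turnover figures below are hypothetical.

TIERS = {
    "banned_application":    (35_000_000, 0.07),
    "act_obligation":        (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier and turnover."""
    fixed_amount, turnover_share = TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Large company (hypothetical €2bn turnover) using a banned application:
# 7% of €2bn = €140m, which exceeds the €35m fixed amount.
print(f"€{max_fine('banned_application', 2_000_000_000):,.0f}")    # €140,000,000

# Smaller provider (hypothetical €50m turnover) supplying incorrect information:
# 1.5% of €50m = €750k, so the €7.5m fixed amount applies.
print(f"€{max_fine('incorrect_information', 50_000_000):,.0f}")    # €7,500,000
```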

The agreement also clarifies that individuals or organisations can lodge complaints about non-compliance with the appropriate market surveillance authority, which will handle them in line with its established procedures.

Under the provisional agreement, the AI act should apply two years after its entry into force, with exceptions for certain specific provisions.

As a global B Corp organisation, HewardMills is ready to partner with and support your organisation in safeguarding personal data and navigating challenging, ever-evolving global data protection requirements. Contact our team if you would like to discuss any of these topics or regulatory updates.

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at dpo@hewardmills.com.