Since the adoption of the landmark EU Artificial Intelligence Act (EU AI Act) in 2024, the governance landscape across Europe has evolved rapidly. The past year has witnessed a dual approach from the EU: strict risk-based regulation on one hand and large-scale investment in AI infrastructure on the other, including €20 billion for an "AI gigafactories" initiative announced in April 2025. A year on, what do data protection officers, privacy teams, and business leaders need to focus on where AI governance is concerned? We take a look.

Regulatory developments and early enforcement

The ongoing developments in AI regulation reflect not only the pace at which emerging technologies are advancing but also the EU’s commitment to being a leader in both AI innovation and regulation.

Looking across the wider EU, Spain acted early by establishing the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) in late 2023, becoming the first EU country with a dedicated AI regulator. All EU member states are required to designate national AI authorities by August 2025. These bodies will supervise compliance, conduct audits, and coordinate with the European AI Office to ensure consistency across jurisdictions. Data protection authorities (DPAs), such as Italy’s Garante, continue to play a parallel role, particularly where AI involves personal data processing.

EU institutions have also prioritised clarification. The European Commission published an updated Q&A in August 2024 and followed it in March 2025 with dedicated guidance on general-purpose AI models. These documents provide much-needed clarity on the scope of the EU AI Act and its implementation roadmap.

In parallel, the EU recently launched the "AI Continent Action Plan", a €200 billion strategy to boost the European AI ecosystem. It includes the €20 billion earmarked for AI gigafactories (large-scale AI supercomputing centres) and the creation of an AI Act Service Desk to help organisations interpret their obligations. These measures aim to support innovation and regulatory compliance alike, avoiding the pitfalls seen during GDPR implementation.

In early 2025, consultations also began on adjusting specific compliance burdens in response to industry feedback. While officials reaffirmed that the risk-based structure of the EU AI Act will remain intact, adjustments may be introduced to streamline documentation and reporting obligations.

Industry response and practical adaptation

Over the past year, industry has engaged with regulators to shape voluntary frameworks while preparing for formal enforcement. One example is the draft code of practice for general-purpose AI providers, which reached its third iteration in March 2025. It sets out voluntary commitments on transparency, copyright, and risk management, and may become a formal compliance baseline in future. Major AI developers, including both EU and non-EU companies, have contributed to its development.

Practical compliance steps are also underway. Generative AI services have introduced disclaimers to alert users when content is AI-generated. Messaging platforms and chatbots have begun adding visual indicators to flag AI outputs. Social media platforms have been experimenting with watermarking tools and labelling standards to identify synthetic media, in anticipation of mandatory EU transparency obligations.

Global firms face mounting challenges in aligning with multiple jurisdictions. While the EU AI Act is shaping up to become a global benchmark, much as the GDPR did, definitions such as "high-risk AI" and documentation standards still differ elsewhere. Businesses are therefore looking to adopt shared internal ethical principles and harmonised governance procedures to navigate divergent rules.

Some non-EU firms are voluntarily adopting EU AI standards to prepare for eventual convergence. However, uncertainties remain. Differences in risk classification and enforcement timing could cause strategic difficulties. The EU continues to participate in international alignment efforts, including the Council of Europe’s AI Convention and bilateral dialogues with countries like the United States.

Three steps to map your organisation’s position

Over the past year, businesses have had an opportunity to implement measures that comply with the EU AI Act, specifically ensuring that:

  • Prohibited applications (such as social scoring) cease immediately
  • High-risk AI systems (e.g. recruitment tools or biometric identification) meet strict requirements on transparency, record keeping, and human oversight
  • Limited-risk systems (e.g. chatbots) have the required basic labelling

Organisations can refer to the Commission’s Q&A documents and the AESIA self-assessment templates to identify any compliance gaps. For multinationals, aligning with EU standards now can pre-empt future regulatory conflict and build a compliance baseline that supports global operations. For a detailed breakdown of the EU AI Act’s requirements, refer to HewardMills’ previous analyses “A DPO’s guide to navigating Conformity Assessments under the EU AI Act” and “How privacy teams can prepare for the EU AI Act coming into force”.
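For privacy teams that track their AI estate in a structured inventory, the risk-tier triage above can also be captured in code. The sketch below is a minimal, illustrative Python example: the tiers mirror the Act’s risk categories discussed in this article, but the system names, fields, and checks are hypothetical placeholders, not a definitive compliance test.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Risk categories mirroring the EU AI Act's risk-based structure."""
    PROHIBITED = "prohibited"  # e.g. social scoring: must cease
    HIGH = "high"              # e.g. recruitment tools, biometric identification
    LIMITED = "limited"        # e.g. chatbots: transparency/labelling duties
    MINIMAL = "minimal"        # no specific obligations under the Act

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (fields are illustrative)."""
    name: str
    tier: RiskTier
    human_oversight: bool = False     # expected for high-risk systems
    record_keeping: bool = False      # expected for high-risk systems
    transparency_label: bool = False  # expected for limited-risk systems

def compliance_gaps(system: AISystemRecord) -> list[str]:
    """Return outstanding actions for one system, keyed to its risk tier."""
    gaps: list[str] = []
    if system.tier is RiskTier.PROHIBITED:
        gaps.append("decommission immediately: prohibited practice")
    elif system.tier is RiskTier.HIGH:
        if not system.human_oversight:
            gaps.append("establish human oversight")
        if not system.record_keeping:
            gaps.append("enable record keeping and logging")
    elif system.tier is RiskTier.LIMITED:
        if not system.transparency_label:
            gaps.append("label AI-generated interactions and content")
    return gaps

# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystemRecord("cv-screening-tool", RiskTier.HIGH, human_oversight=True),
    AISystemRecord("support-chatbot", RiskTier.LIMITED),
]
for system in inventory:
    for gap in compliance_gaps(system):
        print(f"{system.name}: {gap}")
```

A register of this kind is no substitute for legal analysis, but it gives DPOs a repeatable way to surface gaps, such as missing oversight, logging, or labelling, across many systems at once.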

Outlook

The EU AI Act has moved quickly from abstract regulation to operational reality. National regulators are implementing measures in line with it, companies are adjusting product features, and many of its requirements are fast becoming de facto industry standards. The concept of “Trustworthy AI” has emerged as a central pillar of the EU’s approach. For data protection officers and privacy teams, now is the time to embed governance mechanisms that address both legal requirements and ethical risks. With full compliance required by 2026–2027, early action will continue to prove advantageous.

HewardMills supports organisations in aligning with the EU AI Act through tailored compliance assessments and governance frameworks. Our approach ensures that risk classifications, documentation practices, and oversight mechanisms meet both current and future regulatory expectations.