Since the EU AI Act came into force in 2024, organisations have faced significant challenges in implementing its regulatory requirements, particularly the conformity assessments required for high-risk AI systems. For privacy teams and Data Protection Officers (DPOs), these challenges bring nuanced considerations on behalf of the organisations they support. Balancing transparency, maintaining GDPR compliance, coordinating across teams, and mitigating risks requires a strategic approach, one that many teams are still striving to refine.
With the 2 August 2026 deadline for governance rules on high-risk AI models fast approaching, we explore how to ensure your organisation can meet the Act’s requirements for compliance, with a closer look at the key challenges DPOs may encounter with conformity assessments and the strategies that can simplify the approach to managing them.
The power of conformity assessments
Conformity assessments ensure that high-risk AI systems meet the required standards for transparency, safety, privacy, and accountability. The process includes both ex-ante (before deployment) and ex-post (after deployment) assessments and involves either internal assessments (conducted by the organisation) or third-party assessments.
For privacy teams, the conformity assessment serves as a critical mechanism for managing privacy risks, ensuring that AI systems are built with data protection principles embedded from the outset. DPOs play a crucial role in this process. They help interpret the AI Act’s requirements, evaluate data processing practices, and ensure the proper safeguards are in place. They also support continuous monitoring to ensure ongoing compliance with the GDPR and the AI Act. However, many organisations struggle with compliance due to fragmented documentation, insufficient collaboration between different departments, and difficulties assessing ethical risks.
The core requirements and practical strategies
The EU AI Act’s conformity assessment demands a strategic balance between regulatory adherence and practical execution. For privacy teams and DPOs, the challenge lies in translating legal mandates into actionable steps. At its core, compliance is based on three pillars:
- Technical documentation and transparency
DPOs must ensure every design choice, dataset modification, and algorithmic adjustment is meticulously documented. This is not mere administrative work—it forms the foundation of audit readiness. Transparency also extends to explaining AI decisions in accessible and plain language.
Consider creating user-friendly resources, such as a “How Our Hiring AI Works” guide, to enhance understanding and trust.
- Effective data governance and risk management
Establishing clear protocols for how the organisation internally handles data drift and fairness concerns is essential for maintaining compliance and mitigating reputational risk.
Beyond GDPR compliance, organisations should implement bias-detection tools such as IBM’s AI Fairness 360, and host cross-functional “risk sprints” to address emerging threats.
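To make the idea of bias detection concrete, the sketch below computes a disparate-impact ratio, one of the standard fairness metrics that toolkits such as IBM’s AI Fairness 360 provide out of the box. The data and the 0.8 threshold (the "four-fifths rule" commonly used as a screening heuristic) are illustrative only, not a substitute for a proper fairness assessment.

```python
# Illustrative disparate-impact check on hypothetical hiring outcomes.
# 1 = favourable decision (e.g. shortlisted), 0 = unfavourable.

def selection_rate(outcomes):
    """Fraction of favourable outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rates.
    Values below ~0.8 are commonly flagged for review
    (the 'four-fifths rule' screening heuristic)."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # privileged group: 75% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # unprivileged group: 37.5% selected

ratio = disparate_impact_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Flag for review: disparate impact ratio {ratio:.2f}")
# → Flag for review: disparate impact ratio 0.50
```

A metric like this is only a starting point; cross-functional "risk sprints" are where teams decide what a flagged ratio means for the system in question and what remediation looks like.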
- Human oversight and robust processes
Organisations can benefit from more reliable AI-driven systems by ensuring human oversight mechanisms are in place, with well-defined and enforceable actions.
Define clear and explicit thresholds for human intervention in AI decision-making processes. Partner with cybersecurity teams to conduct stress-tests against potential vulnerabilities, such as data poisoning and adversarial attacks.
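An intervention threshold can be as simple as a routing rule. The sketch below is a hypothetical example: the category names and the confidence cut-off are assumptions an organisation would set through its own risk assessment, not values prescribed by the Act.

```python
# Hypothetical routing rule: decisions in high-impact categories,
# or with low model confidence, are escalated to a human reviewer.
CONFIDENCE_THRESHOLD = 0.85  # assumed value; set via your risk assessment
HIGH_IMPACT_CATEGORIES = {"hiring", "credit", "benefits"}  # illustrative

def route_decision(category, model_confidence):
    """Return 'auto' or 'human_review' for a single AI decision."""
    if category in HIGH_IMPACT_CATEGORIES:
        return "human_review"  # high-impact: always a human in the loop
    if model_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # low confidence: escalate
    return "auto"

print(route_decision("hiring", 0.99))     # → human_review
print(route_decision("marketing", 0.60))  # → human_review
print(route_decision("marketing", 0.95))  # → auto
```

The point of writing the rule down explicitly is that it becomes documentable, auditable, and enforceable, rather than an informal practice.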
While these priorities form the foundation of compliance, successful implementation ultimately depends upon practical execution. First, dynamic documentation is essential. By using collaborative platforms, organisations can maintain up-to-date records that automatically reflect system changes. Second, automated monitoring can be deployed through dashboards, allowing for real-time tracking of key metrics such as data drift, user complaints, and system performance anomalies. Finally, human ethics can be embedded into AI workflows by requiring manual sign-off for high-impact AI decisions.
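Automated drift monitoring often reduces to comparing today’s data distribution against the one the model was trained on. The sketch below uses the population stability index (PSI), a widely used drift metric; the binned distributions and the 0.2 alert threshold (a common rule of thumb) are illustrative assumptions.

```python
import math

def population_stability_index(baseline_props, current_props):
    """PSI between two binned distributions (each summing to 1).
    Rule of thumb: PSI > 0.2 signals significant drift."""
    return sum((c - b) * math.log(c / b)
               for b, c in zip(baseline_props, current_props))

# Illustrative binned distributions of one input feature:
# at training time vs. observed in production today
baseline = [0.25, 0.50, 0.25]
current  = [0.05, 0.55, 0.40]

drift = population_stability_index(baseline, current)
if drift > 0.2:
    print(f"Data drift alert: PSI = {drift:.3f}")
# → Data drift alert: PSI = 0.397
```

In practice a dashboard would compute this per feature on a schedule and surface alerts alongside user complaints and performance anomalies, giving the DPO a single view of compliance-relevant signals.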
These strategies ensure that organisations can move beyond regulatory checklists and establish a robust and dynamic AI compliance framework.
Staying ahead of the challenges
The EU AI Act is not a one-time hurdle; it requires ongoing attention. As AI systems evolve, so do risks and regulations. By integrating these strategies, organisations can shift from reactive compliance to proactive governance, ensuring their systems remain ethical, competitive, and audit-ready.
HewardMills’ team of experts can help you navigate conformity assessments with tools, templates, and workshops designed for high-risk AI practices, providing advice that turns compliance into a strategic advantage.