Artificial Intelligence (AI) is transforming the way organisations operate, driving efficiency, innovation and competitive advantage. Yet as the use of AI grows, so do concerns around fairness, transparency, accountability and privacy. According to IBM's “Cost of a Data Breach Report 2025”, 97% of organisations that had experienced an AI-related security incident lacked proper AI access controls.

To balance innovation with responsibility, organisations must put strong governance at the centre of AI adoption. A key player in this effort is the Data Protection Officer (DPO). Traditionally viewed as an oversight and compliance role, the DPO is now emerging as a strategic independent advisor guiding responsible AI governance. 

Why AI governance needs the DPO

AI systems depend heavily on data, often personal data, for training, prediction and continuous improvement. Managing this data responsibly requires an understanding of privacy laws, ethical principles and organisational accountability.  

Privacy offers a foundational blueprint for responsible AI governance. As global regulations such as the EU’s Artificial Intelligence Act (EU AI Act) evolve, the overlap between data protection and AI oversight is becoming increasingly clear.  

For example, an AI system designed to screen job applicants must comply with both data protection requirements under the GDPR (lawful processing, fairness, and transparency) and risk-management duties under the EU AI Act (bias testing, human oversight, and traceability). A misalignment between these obligations, such as using historical HR data without consent or deploying the system without transparent bias testing, could expose organisations to regulatory scrutiny or reputational damage.

The expanding scope of AI governance continues to bring new challenges for organisations. Many privacy teams are still building AI literacy and developing a technical understanding of algorithms and data pipelines. Others face uncertainty about role boundaries, especially when balancing advisory functions with oversight independence. 

Regulatory fragmentation across jurisdictions also adds complexity. Frameworks in the EU, UK, and other regions differ in focus and maturity, requiring organisations to constantly implement and review governance models that must remain flexible and scalable. Addressing these challenges calls for continuous upskilling, cross-functional collaboration, and clear internal definitions of responsibility. The most successful organisations treat AI governance as an evolving capability rather than a one-off compliance exercise.

The DPO sits at the intersection of privacy and AI oversight. The DPO helps organisations translate complex legal and ethical requirements into practical and operational safeguards. The DPO's role is therefore strategic as it aligns AI initiatives with organisational values, ethical standards, and long-term business goals. Additionally, the DPO plays a critical role in ensuring regulatory compliance, mitigating risks of enforcement actions which could have significant financial and reputational implications.

The DPO’s expanding responsibilities in the AI era 

Embedding privacy by design 

The DPO ensures that privacy is not an afterthought, but a principle built into AI systems from the very beginning. This involves assessing the type of data used for training, ensuring lawful bases for processing, and verifying that AI outputs remain explainable and fair. 

By influencing design decisions early, the DPO helps organisations prevent risks such as bias, over-collection of data, or opaque decision-making before they become systemic issues. 
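To make this concrete, the assessment steps described above could be captured in a simple structured record per data source. This is purely an illustrative sketch: the field names, example values, and the review check are hypothetical, not a prescribed DPIA format or any organisation's actual template.

```python
# Illustrative sketch of a minimal record an assessment team might keep for
# each data source feeding an AI system. All field names and values here are
# hypothetical examples, not a standard or prescribed format.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class TrainingDataRecord:
    source: str                    # where the data comes from
    contains_personal_data: bool   # does it include personal data?
    lawful_basis: Optional[str]    # e.g. "consent", "legitimate interests"
    retention_period: str          # documented retention period
    issues: List[str] = field(default_factory=list)

    def review(self) -> List[str]:
        """Flag a basic privacy-by-design gap for follow-up."""
        if self.contains_personal_data and not self.lawful_basis:
            self.issues.append("no lawful basis documented")
        return self.issues

record = TrainingDataRecord(
    source="historical HR applications 2018-2023",
    contains_personal_data=True,
    lawful_basis=None,
    retention_period="unspecified",
)
print(record.review())  # prints ['no lawful basis documented']
```

Keeping such records per data source, however simple, turns "privacy by design" from a slogan into an auditable checklist the DPO can review before a model is trained.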

Leading AI-specific impact assessments 

Conducting privacy impact assessments is necessary to evaluate fairness, transparency, and human oversight in the deployment of AI tools. Such assessments are central to risk-based governance frameworks under both the GDPR and the EU AI Act. The DPO is uniquely positioned to guide this process, ensuring assessments are structured, evidence-based, and aligned with obligations under both the GDPR and the EU AI Act.

By collaborating with data scientists, privacy, security and compliance teams, the DPO can help map data flows, identify lawful bases for AI training and deployment, and evaluate whether safeguards such as bias testing, model explainability, and human-in-the-loop mechanisms are in place. This practical oversight ensures that privacy-by-design principles extend beyond compliance paperwork and are embedded into the AI lifecycle itself.
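One simple bias test a review team might run on a screening tool is a demographic-parity comparison of selection rates across groups. The sketch below is hypothetical: the outcome data and the "four-fifths" threshold are illustrative conventions, not requirements of the GDPR or the EU AI Act.

```python
# Illustrative sketch of a demographic-parity check on an AI screening
# tool's outcomes. The data, group labels, and 0.8 threshold (the informal
# "four-fifths rule") are examples only, not a legal standard.

def selection_rates(decisions):
    """Selection rate (share of positive outcomes) for each group."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 1 = shortlisted, 0 = rejected.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 shortlisted
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 shortlisted
}

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
if ratio < 0.8:
    print(f"Potential disparity: ratio {ratio:.2f} is below the 0.8 threshold")
```

A check like this is only a starting point; a finding below the threshold would prompt deeper investigation, documentation in the impact assessment, and human review rather than automatic rejection of the model.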

Building and maintaining governance frameworks 

Effective AI oversight requires a robust governance structure. The DPO contributes by developing or advising on governance frameworks that define accountability, assign responsibilities, and integrate privacy into every stage of AI adoption. 

Ensuring compliance and accountability 

With the EU AI Act and other regional AI frameworks taking effect, organisations must navigate new layers of compliance. The DPO serves as a bridge between internal teams and regulators, ensuring that transparency, documentation, and oversight requirements are met. 

Through regular monitoring and reporting, the DPO helps the organisation demonstrate accountability, a key expectation in both privacy and AI regulation.

Strengthening trust and ethics 

Public trust is critical for sustainable AI adoption. The DPO supports transparency by helping organisations communicate clearly about how AI systems make decisions, how personal data is used, and what measures are in place to prevent discrimination or bias. 

Embedding fairness and explainability within AI operations does more than reduce risk; it strengthens the organisation's reputation and stakeholder confidence.

Looking ahead 

As AI regulation matures globally, the DPO’s expertise in privacy, accountability, and governance will be indispensable. Organisations with DPO support will be better equipped to innovate safely and maintain stakeholder trust. 

At HewardMills, we believe that responsible AI begins with sound data governance. Our team helps organisations build frameworks, conduct impact assessments, and develop policies that align with both ethical principles and regulatory expectations. 

To learn more about the evolving relationship between data protection and AI governance, read our whitepaper “The regulatory landscape and responsible governance of AI” or contact our team for expert guidance.