Artificial Intelligence (AI) agents, such as chatbots and virtual assistants, have become popular elements in organisational operations due to their capacity to automate day-to-day business processes. However, recent incidents have spotlighted significant privacy concerns associated with these emerging platforms. While AI agents promise transformative benefits for operational efficiencies, their reliance on vast personal and sensitive data sets creates unprecedented privacy risks. The role of DPOs in supporting the business to mitigate these risks, especially as AI regulation emerges globally, has become more critical than ever.  

Customer data collection and analysis 

AI agents are powered by vast data sets, often gathering user data, including purchase histories, browsing behaviours, emails and personal interactions, to enable their functionality. The DPO's critical role emerges in validating the legal basis for each data point collected, particularly under the EU AI Act Article 13's strict transparency requirements for high-risk systems. Without a clear legal basis or user consent, this extensive data collection can lead to compliance issues with regulations like the GDPR and the California Consumer Privacy Act (CCPA). For instance, Italy's temporary ban of ChatGPT in 2023 illustrates the regulatory risks when data collection lacks transparency or valid consent mechanisms. 

Data storage and access control 

Inadequate encryption protocols and weak access controls can expose stored data to unauthorised access and breaches. DPOs guide their organisations to mandate encryption protocols that adapt to real-time processing, in line with the EU AI Act's cybersecurity provisions (Article 15). Emerging regulatory frameworks such as the EU AI Act are designed to ensure adequate measures to protect personal data, and failure to implement such safeguards can result in significant penalties. 

Data sharing with third parties 

When an AI agent shares data with cloud providers or analytics platforms, the DPO works with the business to address, and most importantly minimise, supply chain vulnerabilities. 

The EU AI Act imposes obligations along the AI value chain, requiring scrutiny of third-party AI components. Unchecked data sharing can also violate the principles of purpose limitation and data minimisation outlined in data protection regulations, increasing the risk of privacy breaches. 

Building a watertight AI governance action plan 

The evolving regulatory environment necessitates proactive compliance measures from organisations deploying AI agents. For instance, on 11 March 2025, Spain introduced legislation imposing substantial fines on companies that fail to label AI-generated content appropriately, aiming to curb the spread of "deepfakes" and ensure transparency. Article 14 of the EU AI Act mandates human oversight of high-risk AI systems. 

In response, DPOs can support organisations to implement three key actions to ensure effective AI governance: 

  • Establishment of continuous audit trails documenting every AI training dataset’s provenance and legal basis. This includes ensuring mandatory Data Protection Impact Assessments (DPIAs) are conducted before deploying AI agents, mapping data flows and third-party dependencies. 

  • Implementation of real-time anomaly detection, such as flagging when a chatbot starts requesting unnecessary or sensitive personal data (for example, health information), as an essential practice to embed in privacy operations. 

  • Formalisation of cross-departmental review boards involving IT, legal, and privacy teams as the key to good governance. These review boards should be responsible for collectively approving updates or changes to AI models, ensuring all perspectives are considered in decision-making. 
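To make the anomaly-detection point above concrete, a minimal sketch of a purpose-limitation check is shown below. It flags agent messages that request categories of sensitive data outside the agent's approved purpose. The category names, keyword patterns, and the `flag_sensitive_request` function are all illustrative assumptions; a production system would rely on a vetted classifier and organisation-specific data categories, not a keyword list.

```python
import re

# Hypothetical category patterns for illustration only; real deployments
# would use an approved taxonomy and a more robust detection method.
SENSITIVE_PATTERNS = {
    "health": re.compile(r"\b(diagnosis|medication|medical history)\b", re.I),
    "financial": re.compile(r"\b(bank account|credit card|iban)\b", re.I),
    "government_id": re.compile(r"\b(passport number|social security)\b", re.I),
}

def flag_sensitive_request(agent_message: str, allowed_categories: set) -> list:
    """Return sensitive-data categories the agent message requests that
    fall outside its approved purpose (a data-minimisation check)."""
    flagged = []
    for category, pattern in SENSITIVE_PATTERNS.items():
        if category not in allowed_categories and pattern.search(agent_message):
            flagged.append(category)
    return flagged

# Example: a retail chatbot approved only for order-support contact data
alerts = flag_sensitive_request(
    "To continue, please share your medication list and credit card number.",
    allowed_categories={"contact"},
)
print(alerts)  # → ['health', 'financial']
```

A check like this can feed an alerting pipeline so the privacy team is notified, and the interaction logged, whenever an agent's data requests drift beyond its documented purpose.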

The road ahead 

Ignoring privacy in AI deployments weakens trust and puts organisations at risk of regulatory penalties. With the EU AI Act’s full enforcement beginning in 2026, privacy teams must treat 2025 as their implementation year: adopting proactive compliance measures with the support of a DPO can ease much of the burden when the Act becomes fully operational. 

HewardMills’ team of global DPOs and privacy experts is here to provide critical support in managing the complexities of using AI agents to yield operational efficiencies while meeting regulatory requirements effectively.