Data Protection Day provides a timely opportunity to reflect on how organisations are responding to the increasingly close relationship between data protection, AI governance and workplace decision-making. As AI tools become embedded across business functions, questions of accountability, transparency and risk are no longer abstract. They are practical issues that arise in day-to-day operations and require active management. 

Against this backdrop, HewardMills convened a closed-door AI governance roundtable with Faegre Drinker on 15 January. Hosted by our CEO, Dyann Heward-Mills, the session brought together senior leaders from privacy, legal, compliance and HR functions to discuss the evolving UK, EU and US AI regulatory landscape, and to share experiences of governing AI use in complex organisational environments. 

The discussion was intentionally grounded in implementation. Participants explored how AI governance is being operationalised in practice, the pressures that emerge as AI use scales, and the organisational and cultural challenges that arise when AI is introduced into sensitive workplace contexts. 

The reflections below capture some of the key themes from that conversation and offer insight into how organisations are approaching AI and privacy governance in practice. 
 
AI and privacy governance 

Participants agreed that effective AI and privacy governance requires buy-in across the organisation, both from senior leadership and operational teams. Successful approaches emphasise: 

  • clear ownership and accountability at the top of the organisation 

  • investment in dedicated roles such as Head of Responsible AI or AI champions, with defined responsibilities and performance-linked objectives 

  • embedding governance into day-to-day decision-making rather than treating it as a purely legal or compliance exercise 
A range of governance models was discussed, including multidisciplinary AI ethics or governance boards, typically involving legal, compliance, technology, product and engineering functions. These forums are used to review, challenge and approve AI use cases before deployment. 

Sustaining governance at scale 

A key challenge highlighted was how to sustain governance as AI use expands. Many organisations are struggling to triage and review the growing volume of AI use cases, with participants noting that it is often unrealistic to review every use case centrally. In response, some organisations have adopted more flexible or ad hoc AI governance forums, convened with relevant specialists (e.g. HR or Marketing) as needed to meet demand and address higher-risk use cases.  

Operational and cultural challenges  

While AI offers clear efficiency gains, participants noted that it can also introduce new risks and frictions. Examples included: 

  • AI introducing new inefficiencies where tools are not yet sophisticated enough to support complex workplace activities such as interviewing, producing unreliable results, including false positives and false negatives. Examples given included nuances in dialect and subtle gestures that would be picked up in human-to-human interaction but are often missed by AI 

  • AI being used as a knowledge or decision support tool without sufficient context or critical judgement 

  • Increased pressure on General Counsel and Compliance teams, who often have deep contextual understanding but must spend significant time responding to challenges or assertions generated by AI-driven outputs relied on by the business or by counterparties 

  • Similar challenges within HR teams, particularly in recruitment, where some participants reported that AI tools are not consistently accurate in identifying suitable candidates or matches for open roles 

  • In some contexts, a strong organisational push for AI uptake in support of AI fluency, coupled with low tolerance for being seen as “negative” about AI 

Impact on hiring and talent development 

The discussion also explored the longer-term implications of AI for hiring and workforce development. Participants questioned whether traditional junior roles and career pathways remain fit for purpose in an environment where AI can perform many tasks that were previously part of early-career learning, such as research and information synthesis. This raised broader questions around: 

  • How organisations train and develop junior talent 

  • What experience and skills should be prioritised in early-career roles 

  • Whether expectations of progression and learning need to be fundamentally rethought in an AI-enabled workplace 

Emerging skill demands? 

As the discussion turned to some of the less desirable results of widespread AI use in the workplace (“AI slop”, skill degradation, over-reliance, AI-driven insecurity), participants also reflected on how much more valuable human-centric work is becoming. High value is increasingly placed on activities such as face-to-face relationship-building, persuasion, and the communication of complexity and nuance. These are skills that professionals will need to inculcate in junior talent to keep them competitive, and skills on which some participants reported already spending more time. 

Global challenges 

Participants reflected on privacy and data protection impact assessments relating to the use of AI-enabled technologies in the workplace. Many shared common experiences of carving out certain regions (e.g. Europe) or jurisdictions (e.g. Germany) as out of scope for complex AI technologies. The need for protracted works council discussions and approvals was cited as a major challenge to the global rollout of AI technologies. 

Practical solutions – How to ACE workplace AI deployment! 

  • Accountability: Ensure a clear and documented understanding of the risk and benefit trade-offs. Understand what is “good enough”, what is “unacceptable”, and who owns each risk. 

  • Consultation: Early and ongoing consultation with impacted stakeholders, such as employees and candidates, is highly recommended to build trust in the AI tools and processes deployed in the workplace. This can be achieved through direct consultation and via appropriate transparency notices. 

  • Expertise: The support of independent and skilled experts, such as DPOs and external counsel, in navigating complex global requirements is invaluable. 

At HewardMills, we work with organisations to design and embed pragmatic AI and data protection governance frameworks that operate effectively in complex, real-world environments. If you have a question, are reassessing how AI is governed in your organisation, or would simply welcome an opportunity to exchange views on these issues, our team would be pleased to hear from you.