With the EU AI Act now in force, the need to integrate Environmental, Social, and Governance (ESG) principles into AI frameworks is becoming ever more evident. These principles can serve as a foundation for building greater public trust in the use of AI systems, as they provide a framework for transparency and corporate social responsibility.
As AI becomes an increasingly common part of automated decision-making, bias and discrimination can become commonplace, particularly where sensitive personal information is collected. As data protection officers (DPOs) keep pace with developments in AI governance, we explore the additional benefits of understanding the role that the Social and Governance elements of ESG play in strengthening privacy programmes.
Balancing innovation with privacy protection
AI has arguably enabled companies to streamline technological processes and carry out business operations more efficiently than ever before. That said, businesses must weigh the potential downsides of over-reliance on AI, including the risk of failing to comply with regulatory requirements and the loss of public confidence that may result.
AI’s potential to perpetuate bias or distort decisions, for example in recruitment or financial assessments, raises serious ethical concerns, including questions of who owns data and how it should be used. While the GDPR underpins most existing data protection and privacy frameworks, it does not fully account for emerging AI technologies. This is where ESG comes in, offering an additional framework to help bridge the gap between existing regulations and emerging challenges to privacy.
Working with DPOs to plug the AI ethics gap
Here are some ways in which ESG frameworks can address emerging privacy gaps in AI:
- Risk Assessment: A comprehensive risk assessment should be carried out before any AI system is implemented, with particular attention to how the system handles sensitive personal data. Bias in AI is often cyclical: if it is not addressed at an early stage, it is reinforced by systems trained on biased data. This matters most where special category data is involved, such as information revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, and genetic, biometric or health data, or data concerning a natural person’s sex life or sexual orientation. It is important to assess how that information is being used, whether it will be re-used in future and for what purpose; transparency with individuals about those future uses will be critical. Consideration should also be given to the volume of information needed to train AI models and how this can be reconciled with data minimisation requirements in privacy legislation (see the first sketch after this list).
- Roles and Responsibilities: A robust ESG framework should set out clear roles and responsibilities, and risk assessments can be a useful way to map these. For example, while all team members are responsible for handling sensitive personal information with care, a designated Data Champion can help drive leading practices and empower others.
- Bias Testing and Audits: Regular audits of how AI systems operate in practice can help identify and correct bias and discrimination (see the second sketch after this list). This demonstrates a commitment to improving systems, preventing discrimination and ensuring ongoing compliance with data protection obligations.
- Training and Awareness: Risk assessments, testing and audits work hand in hand with training. Focus areas and specific challenges may be identified to form the basis of bespoke training. Data Protection Officers, for example, could provide training on the close link between privacy and AI. Data Champions may promote awareness of how team members can contribute to ethical AI use.
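To make the data minimisation point above more concrete, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical pandas DataFrame of applicant records; the column names and the choice of fields to keep or exclude are assumptions for the sake of the example, and any real minimisation exercise should be driven by the risk assessment itself.

```python
# Illustrative only: trimming a training extract to the fields the model
# actually needs before it is shared with an AI/ML team.
# Column names ("ethnicity", "health_notes", etc.) are hypothetical.
import pandas as pd

# Fields the model genuinely needs (assumed for this sketch)
REQUIRED_FEATURES = ["years_experience", "qualification_level", "assessment_score"]

# Special category / sensitive fields that should not leave the source system
SENSITIVE_FIELDS = ["ethnicity", "religion", "trade_union", "health_notes", "sexual_orientation"]

def minimise_for_training(records: pd.DataFrame) -> pd.DataFrame:
    """Return a copy containing only the columns needed to train the model."""
    # Fail loudly if a sensitive field has crept into the approved feature list
    overlap = set(REQUIRED_FEATURES) & set(SENSITIVE_FIELDS)
    if overlap:
        raise ValueError(f"Sensitive fields requested for training: {overlap}")
    # Keep only the approved feature columns; everything else is dropped
    return records[REQUIRED_FEATURES].copy()
```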
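Similarly, the bias testing and audit step can begin with something as simple as comparing outcomes across groups. The second sketch below computes selection rates and a disparate impact ratio from a hypothetical sample of decisions; the commonly cited 80% rule of thumb is a screening heuristic rather than a legal standard, and a real audit would go well beyond this single metric.

```python
# Illustrative bias check: compare positive-outcome rates across groups
# in a model's decisions. The data and threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += int(outcome)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest selection rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(sample)
print(rates)                    # group_a ~0.67, group_b ~0.33
print(disparate_impact(rates))  # ~0.5: flag for review if well below 0.8
```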
AI governance is a fledgling space that regulators and DPOs are tracking closely to safeguard personal data. HewardMills follows these developments and offers robust frameworks that address the gaps between ESG and AI. As an external DPO, we can carry out risk assessments from a privacy perspective and help businesses mitigate risk, strengthening stakeholder trust.