Cyberattacks continue to pose a formidable threat to organisations across the world. In recent times, we have witnessed a surge in high-profile breaches, each one leaving a trail of compromised personal data. Take, for instance, the infamous “Mother of All Breaches” (MOAB) in January 2024, which reportedly exposed over 26 billion sensitive records. More recently, the Snowflake breach reminded us of the relentless and evolving nature of cyber threats.

These cyberattacks affect not only the targeted organisations; they have far-reaching implications for millions of individuals whose personal data has been compromised. Data protection officers need to remain alert to changes in regulatory requirements for the management of cyber processes, to minimise the risks to the businesses they oversee.

As cyber threats continue to evolve, adopting intelligent cybersecurity and privacy measures has become essential. To stay ahead of these threats, some businesses are beginning to leverage the power of Artificial Intelligence (AI) to detect, respond to, and prevent cyberattacks effectively. Machine learning, advanced analytics, and automation are transforming the cybersecurity landscape. Data protection officers must continually re-evaluate the risks that new technologies pose to personal data and put guardrails in place to adhere to regulations.

In some situations, AI can empower businesses to enhance their security processes, identify vulnerabilities swiftly, reduce response times significantly, and allocate resources more efficiently. However, it can also introduce privacy risks that are not yet well understood.

Ethical challenges in AI-driven cybersecurity

Undoubtedly, the integration of AI in cybersecurity brings many benefits. However, it also comes with significant challenges that must be addressed to ensure responsible use. These include:

  • Bias and Discrimination: AI systems can inadvertently perpetuate biases present in their training data, leading to skewed outcomes. If the data used to train the AI is biased or corrupted, the system might disproportionately flag certain groups or overlook specific vulnerabilities.
  • Privacy Concerns: AI’s extensive data collection for threat detection purposes may infringe on personal privacy. Collecting and analysing sensitive information at scale raises significant privacy issues.
  • Accountability and Transparency: The “black box” nature of AI technologies makes it difficult to understand and explain their decisions. If an AI system fails, assigning responsibility can also be challenging.

Developing governance frameworks for AI in cybersecurity

The ethical challenges identified with the use of AI for cybersecurity highlight the need for a robust organisational governance framework. This framework should comprise a structured set of rules, policies, standards, and best practices regulating the use of AI technologies in cybersecurity within the organisation. The following are key considerations for organisations:

  • Evolving AI Regulations: The regulatory landscape for AI is changing rapidly. The EU AI Act is set for phased implementation over the next three years. In the US, at least 40 states introduced AI bills in the 2024 legislative session, with six states adopting resolutions or enacting legislation. To stay compliant, it is crucial to ensure your governance framework is adaptable and reviewed regularly so it aligns with new laws and developments.
  • Leadership Oversight and Reporting Obligations: It is important to consider how to keep the leadership team apprised of key developments relating to the deployment and use of AI in cybersecurity within the organisation. Experts and external advisors, such as DPO service providers, can also be invited to provide regular feedback to leadership.
  • Dedicated Oversight: Organisations should consider setting up a cross-functional team dedicated to overseeing AI policy development in cybersecurity. Such a team can help prevent inconsistencies between departments such as HR, IT, and legal, ensuring a cohesive approach. Their duties could include monitoring AI system usage, staying current with technological and regulatory changes, and training relevant employees on responsible AI use.
  • Decision Making: To ensure accountability, assign responsibility for the adoption and use of AI in cybersecurity to named individuals or departments, avoiding unclear or overlapping ownership.
  • Risk Management: Before deploying AI for cybersecurity purposes, establish an internal process for identifying, assessing, and controlling the associated risks. Risk management is particularly relevant in the areas of data protection and consumer protection.

In the face of evolving cyber threats, integrating AI into cybersecurity is both essential and challenging. As the UK National Cyber Security Centre points out, AI lowers the barrier for novice cybercriminals, hackers-for-hire, and hacktivists, making it easier for them to conduct sophisticated attacks. AI will therefore play an indispensable role in supporting cybersecurity efforts against these emerging threats. While the ethical issues surrounding AI use remain a significant concern, a robust governance framework can help ensure AI is used responsibly, fairly, and transparently.

HewardMills is ready to partner with your organisation to develop a robust AI governance framework and navigate ever-evolving AI regulatory requirements. Contact our team if you would like to discuss implementing a governance framework for using AI in cybersecurity.

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at dpo@hewardmills.com.