The EU Commission has published draft regulations setting out harmonised rules on artificial intelligence (AI) that aim to regulate this rapidly developing area of technology. The regulations take a risk-based approach, classifying AI uses into one of the following categories: (i) unacceptable risk, (ii) high risk and (iii) low or minimal risk.
Included in the first category are AI systems that: deploy subliminal techniques to distort a person’s behaviour; exploit the vulnerabilities of a specific group of people in a way that causes physical or psychological harm; classify the trustworthiness of people based on their social behaviour in a way that leads to detrimental or disproportionate treatment; or use real-time biometric identification in public spaces for law enforcement, unless required for a specific public safety objective.
The regulations define eight high-risk application areas of AI, including biometric identification, management of critical infrastructure, and systems that determine access to employment, education and asylum. High-risk AI systems must be subject to risk management, human oversight, transparency, record-keeping and appropriate data governance practices, requirements that aim to minimise the risk of algorithmic discrimination and infringement of fundamental rights, including privacy.
The regulations provide for fines of up to €30 million or 6% of annual global turnover, whichever is higher, for infringement of the rules on unacceptably risky and high-risk AI applications. They also provide for the creation of a European Artificial Intelligence Board, which will be comparable to the European Data Protection Board.
Dyann Heward-Mills, CEO of HewardMills and European Commission ethics adviser, said: “The draft regulations make it clear that companies developing and deploying AI must uphold the highest ethical standards. The European Commission has moved to put privacy and consumer protection front and centre of the coming AI revolution in line with its ambition to create an ecosystem of trust around AI. Of particular note is the need for human oversight and appropriate data governance for high-risk AI applications. This represents an opportunity for independent and qualified Data Protection Officers (DPOs) and privacy practitioners to play a critical role.”