The European Commission’s proposed Artificial Intelligence (AI) Act attempts to regulate a wide range of AI applications, aligning them with European Union (EU) values and fundamental rights through a risk-based approach.
The AI Act focuses on four specific types of use or practice for an AI system: 

  1. Prohibited AI practices and systems, such as exploitative AI; 
  2. High-risk AI, such as systems used as safety components in critical infrastructure or components of products that are regulated by harmonised EU legislation, such as components of medical devices;
  3. AI systems characterised by transparency issues, such as AI systems intended to interact with humans (chatbots); 
  4. Non-risk AI practices, which are covered by the AI Act but not subject to its mandatory requirements. 

High-risk AI systems, such as components of medical devices, may only be used under conformity procedures following a review by a notified body, through which the system is granted a CE marking. A CE marking is only granted if the system and its users implement and maintain risk management throughout the AI system’s lifecycle. This includes record-keeping, risk monitoring and human oversight, with a strong focus on robustness, data accuracy and security.

GDPR and the new AI Act

The new AI Act will, like the General Data Protection Regulation (GDPR), take some adjustment time. However, unlike with the adoption of the GDPR, many organisations within the sector will be prepared, as the AI Act will impose requirements and restrictions similar to those of the GDPR. These include: assessing data processing activities, applying a risk-based approach to data processing, and ensuring continuous compliance throughout the processing cycle, especially when data is processed outside the EU/EEA.

There are various similarities between the AI Act and the GDPR that will aid the implementation of the AI Act. These include:

1. Assessment of application

It is key to evaluate whether a business is or will be affected by regulatory requirements. An essential aspect of the GDPR is risk management, and the AI Act imposes a similar responsibility on organisations, requiring them to analyse relevant software to ascertain whether they will be affected by the AI Act. This will enable organisations to identify and implement solutions for software and technologies that use prohibited or high-risk practices and to ensure those systems are used in accordance with the AI Act.

2. Risk-based approach 

The use of AI will depend on a risk-based scheme under the AI Act: the more risk a system poses, the heavier the restrictions applied. This is similar to the GDPR, which requires data controllers to perform impact assessments for high-risk processing of personal data. The riskier the processing activity, the less likely it is to be deemed lawful and compliant with the GDPR and the proposed AI Act.

3. Organisations must sufficiently document and monitor their systems 

Under the AI Act, users are required to consider risk management throughout the process, including record-keeping, risk monitoring and human oversight. The GDPR similarly prescribes an obligation to keep a written register of personal data processing in certain cases, to ensure data protection and privacy by design and by default, and to conduct an impact assessment for high-risk personal data processing. Both regimes require robust knowledge of the products and internal processing, as well as established internal procedures, structures and workflows to facilitate and maintain efficient day-to-day compliance. Compliance with the GDPR will therefore assist organisations in becoming compliant with the AI Act.

4. Third-country application 

Like the GDPR, the AI Act will have certain third-country applicability where the output of an AI system affects citizens in the EU. A typical scenario could be an organisation processing EU patients’ data in a HealthTech AI system running on servers outside the EU, where the decisions made by the AI affect those EU patients. Before outsourcing such processing, the business must conduct an impact assessment of the intended use of AI.

How does the AI Act affect the life sciences industry?

AI has transformed the life sciences industry in unimaginable ways. It now impacts each stage of a life science product’s lifecycle, from research and drug discovery to clinical trials, manufacturing, supply chain logistics, marketing and sales. As one of the most highly regulated industry sectors, in which there is a particularly close relationship between product safety and health outcomes, life sciences has traditionally, and understandably, approached the adoption of new AI-driven technologies with a large degree of caution. Despite this, the sector has now embraced innovation due to the wealth of opportunities for AI applications. Companies are beginning to use AI to automate existing processes across the entire life sciences value chain. However, with these opportunities come new risks. The increased use of AI in life sciences inevitably raises questions about its impact on the legal and regulatory risk landscape.

If you operate in the life sciences sector, use heavily automated or high-risk automated data processing techniques, and are concerned about the potential impact of the AI Act on your organisation, HewardMills can help. We have several Data Analysts and Consultants based throughout Europe who can advise on EU directives and local requirements pertaining to the AI Act.

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at dpo@hewardmills.com.