The increasing use of AI chatbots to complete tasks more quickly and efficiently is giving rise to data breach risks. According to the Dutch Data Protection Authority (Dutch DPA), recent cases of employees entering sensitive and personal data into AI chatbots have exposed both individuals and organisations to significant privacy risks, and the regulator is urging stronger safeguards.
One of the biggest causes of the rise in chatbot-related breaches is a lack of clear employer guidelines on best practice around their use. In one of the cases reported to the Dutch DPA, an employee at a medical practice entered patients’ medical data into an AI chatbot, contravening the employer’s instructions and compromising sensitive information in the process. Medical data is legally protected under the GDPR, and sharing it without stringent safeguards constitutes a violation of the law. In a separate case, a telecom company reported a breach after an employee entered customer addresses into a chatbot.
Clearer guidelines on the use of chatbots are key
One key risk of using AI tools is the storage of the data entered, often without the knowledge of the employer or the owner of the data. Once data reaches the provider’s servers, the company behind the chatbot gains unauthorised access to a range of personal and sensitive data, in violation of personal data protection under the GDPR.
The Dutch DPA’s guidance urges organisations to set clearer employer guidelines on using AI chatbots, and to include provisions in contracts with AI chatbot providers making it crystal clear that the entry and storage of sensitive data is restricted. In addition, consulting or working closely with a data protection officer can help organisations review their data collection and management processes to ensure their approach remains within the requirements of local laws.
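To make such guidelines concrete, some organisations route employee prompts through an internal screening step before they ever reach an external chatbot. The Python sketch below is a minimal, hypothetical illustration of that idea; the pattern list and the screen_prompt function are our own assumptions for the example, not a tool referenced by the Dutch DPA, and a real deployment would rely on a vetted data loss prevention solution.

import re

# Illustrative patterns for data that should never reach an external chatbot.
# Assumption: these categories loosely mirror the breaches reported to the
# Dutch DPA (medical and contact details); a real deployment would use a
# vetted data loss prevention (DLP) tool rather than hand-written regexes.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "nine-digit ID (BSN-like)": re.compile(r"\b\d{9}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of restricted data found in a prompt (empty if clean)."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    # Hypothetical prompt an employee might try to send to a chatbot.
    prompt = "Summarise the notes for patient J. de Vries, BSN 123456789, email jdv@example.com"
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked before submission; prompt contains:", ", ".join(findings))
    else:
        print("Prompt passed screening.")

In this hypothetical setup, a prompt containing a patient identifier or contact details would be blocked before it leaves the organisation, which is the point at which the breaches reported to the Dutch DPA occurred.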
As well as awareness of long-standing regulations and familiarity with the EU AI Act, employers should ensure regular training of employees on the use of AI, as the technology is constantly evolving and can easily fall outside the scope of existing data protection regulations.
HewardMills works closely with its clients to ensure compliance with AI regulations across global jurisdictions, can support in the event of a breach and can help with reporting to the regulator. Take a look at our latest guidance here:
https://www.hewardmills.com/how-privacy-teams-can-prepare-for-the-eu-ai-act-coming-into-force/