This month, the European Parliament approved the AI Act, which it called the “first regulation on artificial intelligence.” In the US, Utah passed the AI Policy Act, with some declaring it America’s first private-sector AI law. As organisations increase their use of AI, new laws will help manage new risks. But much of this technology has been regulated for decades.

The right to an explanation 

The EU is expected to publish its AI Act in the Official Journal before the end of June, with the regulation taking effect 20 days later. Two years after that, deployers of “high-risk AI systems” will need to offer individuals a “clear and meaningful explanation” of AI-driven decisions that produce “legal or similarly significant effects” (among many, many other obligations).
Data protection professionals will recognise this language – such a right already exists under the GDPR. A host of other laws also regulate “automated decision-making”, from the Colorado Privacy Act to China’s Personal Information Protection Law. In fact, rights related to automated decision-making were first guaranteed under the EU’s Data Protection Directive, passed almost three decades ago in 1995.

The importance of transparency and fairness 

Across the Atlantic, Utah’s new AI Policy Act (AIPA), effective from 1 May, requires businesses using generative AI to ensure their chatbots disclose their artificial nature when prompted by a user. But around five hundred miles west, and nearly six years earlier, California’s Bot Disclosure Law imposed a near-identical requirement – long before anyone was using the term “generative AI”.
Using deceptive tactics to sell products or services has been illegal since the earliest consumer protection laws. Indeed, one purpose of Utah’s AIPA is to emphasise that “the use of artificial intelligence (AI) violates consumer protection laws if not properly disclosed.”
A similar message came from the US Federal Trade Commission (FTC) last month, when the agency warned that businesses could be violating the decades-old FTC Act if they “surreptitiously” changed their terms and conditions to use customers’ data to train AI models.

Remember the fundamentals 

These are just a few of the many ways in which long-standing laws and principles apply to new technologies.
One final example comes from the so-called “HEW Report,” which called for a US federal privacy law in 1973:
“There must be a way for an individual to prevent information about him obtained for one purpose from being used or made available for other purposes without his consent.
Any organisation creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.”
Half a century later, regulators continue to repeat the same message – in a very different context. 

HewardMills has data protection and AI specialists who understand the intersection between these critical areas of technology and law. If you’re considering deploying AI within your organisation, get in touch to discuss how to do so in a responsible, legally compliant way. 

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at