The European Council adopted the AI Act on 21 May, marking one of many steps on the path towards stronger AI regulation. The law will be published in the EU’s Official Journal in the coming weeks and will enter into force 20 days later. The AI Act’s first substantial impact will come six months after publication, when certain “prohibited AI practices” will be banned throughout the EU. The bulk of the law takes effect 24 months after publication, when most “high-risk AI systems” start to face tighter rules. But the EU’s headline-grabbing regulation isn’t the only important AI law. Here’s a look at three other significant AI legal developments that occurred this week.
Colorado passed its own AI act
On 17 May, Colorado passed the Colorado Artificial Intelligence Act (SB24-205). While shorter and simpler than the EU’s AI Act, Colorado’s new law will impose some similar obligations on developers and deployers of “high-risk AI systems” from February 2026. Both developers and deployers of high-risk systems must take reasonable care to avoid “algorithmic discrimination” based on age, ethnicity, or other protected characteristics.
For AI developers, this means providing extensive documentation about how a system was trained, what risks it carries, and how it should be used, and notifying the Attorney General if algorithmic discrimination is likely to occur.
Companies using high-risk AI systems must implement a risk management framework such as NIST’s AI RMF or ISO 42001, provide transparent information to consumers, and complete an annual “impact assessment”, among other obligations. If signed by the state’s governor, the Colorado AI Act will be one of the world’s most extensive efforts to regulate private sector AI systems.
The UK regulator dropped an enforcement threat against Snap’s generative AI product
The UK’s Information Commissioner’s Office (ICO) has decided not to pursue enforcement against social media firm Snap over its “My AI” feature, a generative AI chatbot integrated into Snapchat. After issuing a preliminary enforcement notice against Snap last October following a four-month investigation, the ICO said it was satisfied that Snap had taken the risk assessment and mitigation measures necessary to avoid a sanction. The ICO described its investigation as a “warning shot for industry” and has since announced that it will be “making enquiries” with Microsoft regarding its new AI product, Copilot Recall.
Scarlett Johansson threatened OpenAI with legal action
OpenAI has been accused of imitating the voice of actor Scarlett Johansson in the voice mode of its new GPT-4o model. OpenAI CEO Sam Altman approached Johansson last year to ask whether she would provide her voice for the product, saying he was a fan of the 2013 film Her, in which Johansson voices an AI. Although she declined OpenAI’s offer, Johansson says one of GPT-4o’s voices, “Sky”, sounds so “eerily similar” to her own that even her “closest friends and news outlets could not tell the difference.”
Back in 1988, Ford Motor Company asked singer and actor Bette Midler to feature in one of its ads. Midler declined, and Ford hired an impersonator instead. The US Court of Appeals for the Ninth Circuit found that Midler’s voice was an integral part of her identity and that Ford needed her permission to imitate it. OpenAI says it did not intend to mimic Johansson’s voice and has pulled the “Sky” option from its app. Whether or not the case ends up in court, it’s a reminder of how the law already affects so many aspects of AI.
When implementing or developing new technology, risks arise in unexpected ways. HewardMills supports clients to take an informed and ethical approach.