For sheer convenience, few security measures beat biometrics. We carry our fingerprints and facial features everywhere we go. Unlike a password, an iris print is unique to its owner, and you can't forget it. But the same qualities that make biometrics convenient also make them sensitive: you can change a breached password in minutes, but changing your biometrics is practically impossible. And while the technology is growing in popularity and accuracy, governments seem increasingly determined to tightly regulate its use.

Is biometric identification biased? 

Facial recognition systems can cause discrimination, particularly when trained on biased datasets that underrepresent people from marginalised groups. But recent research suggests the technology is becoming more reliable across demographic groups.

A 2019 NIST study of 189 facial recognition algorithms found that “Asian and African American” faces were between 10 and 100 times more likely to register false positives than white faces. A 2024 study, also by NIST, showed roughly equal accuracy across demographics for the top 100 algorithms—with some even showing a very slight bias against white men. 

Advances in AI partly explain the apparent improvement in biometric identification systems. But regulatory pressure could also be a factor. 

Regulatory crackdown on biometric technology 

Due to the inherently risky and sensitive nature of biometrics, regulators have been particularly active in enforcing the law against developers and users of facial recognition systems. 

In December 2023, the US Federal Trade Commission (FTC) brought an action against the retail chain Rite Aid after its allegedly "reckless" use of facial recognition caused its customers "humiliation and other harms." The company was ordered to destroy its biometric data, photos, videos, and any models or algorithms derived from them.

Since 2021, regulators worldwide have been pursuing Clearview AI, a company that operates a database of billions of facial recognition profiles. Facing at least five fines under the GDPR, the company has pulled out of Europe (reportedly without paying any of the fines).

OpenAI CEO Sam Altman’s biometrics project, Worldcoin, is also facing investigation across multiple jurisdictions. The company drew regulators’ attention due to its policy of offering £50 worth of cryptocurrency in exchange for people’s iris prints. 

New laws on biometric data 

In addition to strong enforcement of existing laws, governments are passing new legislation to try to keep up with the proliferation of biometric technology. This isn’t a recent trend. When US states began passing data breach notification laws in 2003, they rarely covered biometric information. In the decades since, most states have amended their laws to require businesses to report biometric data breaches. 

States such as Illinois, New York, and Texas have enacted more substantial laws specifically regulating biometrics. And each of the 18 comprehensive state privacy laws passed since 2018 classes biometric information as “sensitive data”. 

The EU’s AI Act will also have significant implications for biometrics, particularly regarding biometric systems used to “categorise” people or identify them in real time. But biometric technology continues to develop despite, or perhaps because of, these legal and regulatory pressures.

With expertise on emerging technologies and the latest legal developments, HewardMills can help you ensure any use of biometrics is fair, secure, and legally compliant. 

If you would like to discuss this topic or anything else data protection and privacy-related, please contact us at