The following are brief summaries from regulators around the world. Some of the measures are yet to come into force, and Data Protection Officers must remain alert to commencement dates and evolving requirements, and be prepared to implement processes that enable compliance within the organisations they oversee.
China tightens regulations on facial recognition and AI-generated content
In a significant move to strengthen data protection and govern the ethical use of emerging technologies, Chinese regulators have issued the Administrative Measures for the Application Security of Facial Recognition Technology, targeting facial recognition and AI-generated synthetic content. These regulations, jointly developed by key agencies including the Cyberspace Administration of China (CAC) and the Ministry of Public Security, reflect a broader push to enhance privacy, transparency, and digital trust, and come into effect on 1 June 2025.
Key provisions include:
- Purpose limitation and necessity: facial recognition may only be deployed for clearly justified purposes with minimal intrusion on individuals' rights
- Notice and consent: data subjects must be fully informed, with explicit consent required—particularly when processing minors' data
- Alternatives mandated: where feasible, alternative identification methods must be offered
- Data minimisation and localisation: facial data should typically be stored on-device, not transmitted online, and retained only as long as necessary
- Sensitive areas protected: use of facial recognition is prohibited in private spaces within public venues (e.g. hotel rooms, restrooms)
- Security and registration: systems must implement strong security measures and register with authorities if storing over 100,000 records
- Impact assessments: a Personal Information Protection Impact Assessment (PIPIA) must be conducted prior to deployment, with records retained for at least three years
These measures apply broadly to all entities processing facial data in China, with exceptions for purely research or algorithm training purposes. Additional safeguards apply to vulnerable individuals, including minors.
Kenya launches national AI strategy for 2025-2030
Kenya recently launched its first official National Artificial Intelligence Strategy (2025-2030), which sets out a government-led vision for ethical, inclusive, and innovation-driven AI adoption across the country. The government has committed significant resources to support foundational investments in policy, infrastructure, and capacity-building.
While existing laws such as the Data Protection Act 2019, the Computer Misuse and Cybercrimes Act 2018, intellectual property (IP) laws, and the Consumer Protection Act 2012 offer some guidance, they are currently insufficient to address AI’s complexities. The strategy articulates policy ambitions that will be of interest to global companies developing, deploying, or investing in AI technologies across Africa.
The strategy is anchored on three main pillars: building robust AI digital infrastructure; establishing a sustainable data ecosystem; and fostering AI research, innovation, and commercialisation. Supporting these pillars are four cross-cutting enablers: governance (with a focus on ethical, legal, and regulatory frameworks), talent development (integrating AI literacy into education and workforce training), strategic investment, and a strong commitment to ethics, equity, and inclusion.
Kenya’s approach emphasises collaboration with key stakeholders, including the private sector and academic institutions, as well as international partnerships such as those with the European Union and the UK’s Foreign, Commonwealth & Development Office.
German companies urged to delete outdated data and align with new 2025 retention laws
The German regulator recently reminded businesses, organisations and companies to review and delete outdated personal data, in line with the GDPR and updated German legal requirements. From 2025, new retention rules under Germany’s Fourth Bureaucracy Reduction Act shorten the retention period for certain documents, notably accounting records, which now need only be retained for eight years instead of ten.
Beyond statutory timelines, businesses are also required to establish internal deletion schedules for personal data that no longer serves a purpose, even where no legal retention period applies. These schedules must define clear retention periods based on data type, business needs, or limitation periods, and ensure actual deletion follows.
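The scheduling obligation described above can be sketched in code. The data categories, retention periods, and function names below are hypothetical illustrations chosen for this sketch, not figures prescribed by any regulator:

```python
from datetime import date, timedelta

# Hypothetical retention schedule: data category -> retention period in years.
# Real schedules must be set per statutory requirement, business need, or
# limitation period, and documented in the organisation's deletion concept.
RETENTION_SCHEDULE = {
    "accounting_record": 8,   # e.g. a statutory period
    "applicant_data": 1,      # e.g. a limitation-period-based internal rule
    "marketing_consent": 3,   # e.g. a business-defined period
}

def deletion_due(category: str, collected_on: date) -> date:
    """Return the date by which a record of this category falls due for deletion."""
    years = RETENTION_SCHEDULE[category]
    # Approximates a year as 365 days for simplicity; real schedules often
    # anchor on statutory triggers such as the end of the calendar year.
    return collected_on + timedelta(days=365 * years)

def is_overdue(category: str, collected_on: date, today: date) -> bool:
    """Flag records whose retention period has elapsed, so deletion can follow."""
    return today >= deletion_due(category, collected_on)
```

The point of the sketch is the structure the regulator expects: a defined period per data type, a computable due date, and a check that actual deletion follows.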
Regulators, such as Hamburg's Data Protection Authority (HmbBfDI), are focusing enforcement on data retention and deletion practices. Recent fines, including a €900,000 penalty for inadequate data deletion, highlight the consequences of non-compliance. Across Europe, data deletion is also the focus of coordinated regulatory audits, as regulators continue to emphasise that consistent deletion reflects sound data compliance and protects individuals' rights.
Spain proposes Bill to align national AI regulation with EU AI Act
The Spanish government has announced a proposed Bill to harmonise Spain’s legal framework with the EU’s AI Act, aiming to ensure AI is used ethically, inclusively, and responsibly. The EU AI Act establishes a decentralised penalty framework for AI systems, detailing explicit conditions for administrative fines concerning infringements of specific provisions. The Bill therefore positions Spain as one of the first EU countries to implement the penalty framework set out in the EU AI Act.
The Bill introduces strict penalties for violations, particularly relating to banned AI practices and high-risk systems. Oversight will primarily be handled by Spain’s newly created AI supervisory agency (AESIA), working alongside sector-specific bodies like the Spanish Data Protection Agency, the Bank of Spain, and the Central Electoral Board.
The Bill mirrors the EU AI Act’s focus on banning AI applications deemed unacceptably risky, such as manipulative subliminal techniques or biometric categorisation based on sensitive attributes. Under the EU AI Act, fines for violating these bans can reach €35 million or 7% of global annual turnover, whichever is higher, and Spain’s Bill adopts the same maximum penalties.
High-risk AI systems, including those used in medical, industrial, financial, or biometric contexts, must comply with strict transparency, safety, and human oversight requirements. Failure to do so could attract fines ranging from €500,000 to €15 million, depending on the severity of the offence.
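For illustration, the "€35 million or 7% of global turnover, whichever is higher" cap for prohibited practices can be expressed as a simple calculation. This is a sketch of the statutory maximum only; actual penalties are set case by case within it:

```python
def prohibited_practice_fine_cap(global_turnover_eur: int) -> int:
    """Maximum administrative fine for banned AI practices under the EU AI Act:
    the higher of EUR 35 million or 7% of worldwide annual turnover.
    Amounts are kept as integer euros to avoid floating-point rounding."""
    return max(35_000_000, global_turnover_eur * 7 // 100)
```

For a company with €1 billion in global turnover, 7% (€70 million) exceeds the €35 million floor, so the higher figure applies; for smaller companies the €35 million figure governs.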
AESIA is also tasked with supporting AI innovation, particularly through the use of AI sandboxes for testing high-risk systems before deployment. The draft Bill will now undergo a legislative process before being submitted to Parliament for approval.
HewardMills’ global team of data protection and privacy experts continuously monitors the horizon for emerging regulatory changes and is ready to support you in implementing measures that keep your business compliant.