As 2026 begins, the U.S. privacy and AI regulatory landscape continues to evolve. With multiple state-level privacy laws and workplace AI obligations now in force, many organisations must operate within a fragmented framework shaped not by a single national standard but by state-specific priorities and enforcement models.
Kentucky, Rhode Island, Indiana, Illinois, and California have each introduced new or amended legislation effective from 1 January 2026 or shortly thereafter. While these frameworks share common themes such as consumer rights, transparency and accountability, each contains distinct obligations that demand tailored compliance strategies.
This blog explores the requirements of these new laws and how organisations can prepare.
Kentucky’s Consumer Data Protection Act (KCDPA)
Effective 1 January 2026, Kentucky’s amended Consumer Data Protection Act adds to the growing roster of U.S. state privacy frameworks.
A familiar structure with distinct implications
The KCDPA applies to organisations that either process personal data of 100,000 or more Kentucky residents annually or derive more than 50% of revenue from the sale of personal data while processing data of at least 25,000 residents. This volume-based scoping model places particular focus on consumer-facing business models rather than universal coverage.
Kentucky residents benefit from rights of access, correction, deletion, portability, and opt-out from targeted advertising or data sales. However, the law contains no private right of action. Enforcement authority rests exclusively with the Kentucky Attorney General, with civil penalties of up to $7,500 per violation.
The statute includes a 30-day cure period, providing businesses with an opportunity to remediate alleged breaches before enforcement. However, while helpful in the early stages, this should be treated as a limited margin for error, not a substitute for readiness.
Rhode Island’s Data Transparency and Privacy Protection Act (RIDTPPA)
Rhode Island’s law, also effective 1 January 2026, expands the reach of U.S. state privacy obligations with notably lower applicability thresholds.
A wider net and stricter enforcement posture
The RIDTPPA applies to organisations processing the personal data of at least 35,000 Rhode Island residents annually, or at least 10,000 residents where data sales constitute at least 20% of revenue. These thresholds bring many mid-sized organisations into scope for the first time.
The law introduces enhanced disclosure obligations around the categories of personal data collected, purposes of processing, and data sharing or sale practices. Again, enforcement powers sit solely with the Attorney General, but unlike Kentucky, Rhode Island provides no statutory cure period, enabling immediate regulatory action where non-compliance is identified.
Furthermore, affirmative consent is required to process sensitive personal data, reinforcing the state’s emphasis on consumer autonomy and increasing compliance obligations for higher-risk datasets.
Indiana Consumer Data Protection Act (ICDPA)
Indiana’s ICDPA completes the trio of state privacy laws that entered into force on 1 January 2026.
Similar design, distinct execution
The ICDPA applies to organisations processing the personal data of at least 100,000 Indiana residents, or at least 25,000 residents where over half of revenue derives from data sales. It excludes nonprofits, Gramm-Leach-Bliley Act (GLBA)-regulated entities, HIPAA-covered organisations, and universities.
Sensitive data processing requires explicit consent, raising the bar for profiling, behavioural analytics, and targeted advertising. As with Kentucky, enforcement authority lies with the Indiana Attorney General, with penalties up to $7,500 per violation and a 30-day cure window following written notice.
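The volume- and revenue-based scoping rules of the three state laws above can be expressed as a simple decision check. The sketch below is illustrative only, using the thresholds as summarised in this post; the function and parameter names are hypothetical, and real applicability analysis involves further statutory exemptions.

```python
# Illustrative sketch of applicability thresholds for the KCDPA (KY),
# RIDTPPA (RI), and ICDPA (IN), as summarised above.
# Function and parameter names are hypothetical.

def in_scope(state: str, residents_processed: int, sale_revenue_share: float) -> bool:
    """Return True if an organisation meets a state's applicability threshold.

    residents_processed: annual count of that state's residents whose data is processed
    sale_revenue_share: fraction of revenue from personal data sales (0.0 to 1.0)
    """
    if state in ("KY", "IN"):
        # 100,000 residents, or 25,000 residents plus >50% revenue from data sales
        return residents_processed >= 100_000 or (
            residents_processed >= 25_000 and sale_revenue_share > 0.5
        )
    if state == "RI":
        # 35,000 residents, or 10,000 residents plus at least 20% revenue from sales
        return residents_processed >= 35_000 or (
            residents_processed >= 10_000 and sale_revenue_share >= 0.2
        )
    raise ValueError(f"No threshold data for state: {state}")

# A mid-sized business can be in scope in Rhode Island yet out of scope
# in Kentucky and Indiana:
print(in_scope("RI", 40_000, 0.0))  # True
print(in_scope("KY", 40_000, 0.0))  # False
```

The asymmetry in the example output is the practical point: Rhode Island's lower thresholds pull in organisations that the Kentucky and Indiana laws leave untouched.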
Illinois Workplace AI Law
Illinois has become one of the first U.S. states to introduce binding obligations for employers using AI in the workplace.
Bringing AI governance into civil rights enforcement
Effective 1 January 2026, amendments to the Illinois Human Rights Act prohibit the use of AI systems that result in discriminatory outcomes in hiring, promotion, termination or other employment decisions. Employers must notify employees and applicants when AI tools influence such decisions, and the use of proxies such as ZIP codes to infer protected characteristics is explicitly banned.
By embedding AI oversight into anti-discrimination enforcement, the law makes algorithmic governance a legal requirement rather than an ethical aspiration.
California’s ADMT Rules under the CCPA
California’s Automated Decision-Making Technology (ADMT) framework represents one of the most advanced AI governance regimes currently embedded within U.S. privacy law.
On 23 September 2025, the California Office of Administrative Law approved the final regulations proposed by the California Privacy Protection Agency (CPPA). These regulations, effective 1 January 2026, do far more than regulate automated decisions. They introduce a new compliance architecture that combines privacy risk assessments, cybersecurity audits, and consumer-facing transparency obligations into a unified accountability model.
A phased compliance programme with defined milestones
From 1 January 2026, applicable businesses must conduct and document privacy risk assessments for high-risk processing activities and be prepared to submit this documentation to the CPPA by 1 April 2028.
The ADMT-specific obligations take effect from 1 January 2027. From this point, businesses using personal data to make or materially support “significant decisions,” such as in employment, credit, housing, healthcare, insurance or access to essential services, must provide enhanced pre-use notices, explain system logic in plain language, enable access to meaningful information about decisions, and honour consumer opt-out rights that trigger human-led alternatives.
The framework also introduces mandatory cybersecurity audit certification requirements, which scale by organisational revenue:
- Businesses generating over $100 million annually must submit certifications by 1 April 2028
- Businesses earning $50–100 million must submit by 1 April 2029
- Businesses earning under $50 million must submit by 1 April 2030
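The revenue tiers above amount to a simple deadline lookup. The sketch below is illustrative only, reflecting the tier boundaries as listed in this post; the function name is hypothetical.

```python
from datetime import date

# Illustrative sketch of the revenue-tiered cybersecurity audit
# certification deadlines under California's ADMT regulations,
# as summarised above. The function name is hypothetical.

def audit_certification_deadline(annual_revenue_usd: float) -> date:
    if annual_revenue_usd > 100_000_000:      # over $100M
        return date(2028, 4, 1)
    if annual_revenue_usd >= 50_000_000:      # $50M to $100M
        return date(2029, 4, 1)
    return date(2030, 4, 1)                   # under $50M

print(audit_certification_deadline(120_000_000))  # 2028-04-01
```

In practice, the larger the organisation, the earlier its certification falls due, so revenue forecasting feeds directly into compliance planning.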
How the DPO function supports parallel compliance
A well-positioned Data Protection Officer (DPO) function helps organisations move beyond jurisdiction-by-jurisdiction firefighting and instead build a coherent, multi-layered compliance model that can absorb new laws as they emerge. The DPO also plays a central role in integrating privacy, AI governance and cybersecurity. For example, California’s ADMT rules require privacy risk assessments, cybersecurity audit readiness and consumer-facing transparency obligations to be operationalised together. Without coordinated oversight, these obligations often become siloed between legal, IT, HR and security teams. The DPO function bridges these silos by embedding privacy-by-design principles into AI systems, vendor management processes, employment tools and technical controls.
Equally important is the DPO’s role in maintaining evidence-based accountability. As regulators increasingly expect documentation of risk assessments, consent logic, rights handling, and automated decision-making controls, the DPO function ensures governance is not only implemented but demonstrable. This includes guiding Data Protection Impact Assessments (DPIAs), overseeing ADMT inventories, validating consent and opt-out workflows, and ensuring compliance milestones are met across phased regulatory timelines.
Finally, the DPO serves as the organisation’s interface with regulators, helping to translate enforcement trends into practical internal action and ensuring the organisation can respond credibly to audits, investigations or information requests.
At HewardMills, we act as an external, independent DPO for organisations navigating the complexity of U.S. state privacy and AI regulations. Operating as an extension of internal teams, we provide objective oversight, practical regulatory insight, and ongoing support to help translate evolving legal obligations into operationally effective governance. Ultimately, we help organisations strengthen accountability frameworks, manage risk proactively, and engage confidently with regulators.
Whether you’re looking to enhance compliance readiness, mature your governance model, or reduce your organisation’s regulatory exposure, we’re ready to support you. Get in touch today.