The Fractured Landscape
There is no federal AI law in the United States. As of February 2026, no comprehensive federal statute governs the development, deployment, or use of artificial intelligence. Congress has held hearings, introduced bills, and issued reports; none of it has produced binding legislation. Into this vacuum, the states have stepped — aggressively, unevenly, and with increasing urgency.
The result is a regulatory patchwork that would be comical if the stakes were not so high. A company deploying an AI hiring tool must comply with Illinois's civil rights framework, California's transparency mandates, Colorado's impact assessment requirements, and Texas's risk-based penalty structure — all simultaneously, all with different definitions of key terms, and all with different enforcement mechanisms. And this is just the beginning. At least 78 AI-specific bills are active across 27 state legislatures as of this writing, with more introduced every week.
The Trump administration's December 2025 executive order attempted to impose order on this chaos by proposing federal preemption of "inconsistent" state AI laws. But executive orders are not statutes. Until Congress acts or courts rule, state laws remain fully enforceable. Companies that assume federal preemption will rescue them from state compliance obligations are making a dangerous bet. As we analyzed in our coverage of the EU AI Act's impact on US companies, the trend globally is toward more regulation, not less — and the US state-level landscape is following that same trajectory.
The Federal Context: Preemption Without Legislation
To understand the state-level landscape, you must first understand what the federal government has and has not done. On January 20, 2025, President Trump revoked Biden's Executive Order 14110, which had established the most comprehensive federal AI governance framework to date. EO 14110 had required safety testing for powerful AI models, directed agencies to develop AI standards, and created reporting requirements for frontier AI developers. All of it was undone with the stroke of a pen.
On December 11, 2025, Trump signed a new executive order proposing preemption of state AI laws that are "inconsistent with Federal policy." The order directed the Attorney General to establish an AI Litigation Task Force charged with identifying and challenging state laws that, in the administration's view, impede AI innovation. The order specifically named Colorado's AI Act as an example of "excessive" state regulation.
The legal reality, however, is straightforward. An executive order cannot preempt state law. Federal preemption requires either an act of Congress or a valid federal regulation that conflicts with state law. The Trump administration's EO expresses a policy preference. It does not create binding law. Until Congress passes preemptive legislation or federal courts rule that specific state laws are preempted by existing federal statutes, every state AI law on the books is enforceable.
The Attorney General's AI Litigation Task Force may bring challenges to specific state laws, but those challenges will take years to resolve. In the meantime, compliance is not optional.
Laws Already in Effect: January 1, 2026
Several significant AI laws took effect on January 1, 2026. These are not proposals or pending legislation. They are enforceable law, with real penalties and, in some cases, private rights of action.
California
California leads the nation in AI legislation, with four major laws now in effect.
SB 53 — Transparency in Frontier AI Act. This law requires developers of frontier AI models — defined as models trained using computational resources exceeding a threshold set by the California Department of Technology — to implement risk assessment frameworks, maintain safety protocols, and report serious incidents to the state. The law includes robust whistleblower protections for employees who report safety concerns about AI systems, a provision that intersects directly with the emerging wave of agentic AI liability questions. Penalties run as high as $1 million per violation. Notably, SB 53 applies to developers regardless of where the model is deployed, creating extraterritorial reach that has drawn comparisons to the EU AI Act.
AB 2013 — Training Data Transparency Act. This law requires AI developers to disclose the datasets used to train generative AI models, including the sources of training data, the types of data included, and whether the data contains personal information. The law was designed to address the opacity of training data pipelines and to give individuals and copyright holders visibility into whether their data was used without consent.
SB 243 — Companion Chatbot Safety Act. Perhaps the most novel AI law in the country, SB 243 specifically targets AI chatbot applications that form ongoing conversational relationships with users. The law requires suicide prevention protocols integrated into chatbot responses, enhanced protections for minors including age verification mechanisms, and — in a provision that attracted significant media attention — mandatory break reminders after three continuous hours of interaction. The law was drafted in response to several high-profile cases involving teenagers who developed harmful parasocial relationships with AI chatbots.
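For teams building compliance into a chatbot product, the three-hour provision reduces to session timekeeping. Below is a minimal sketch of one way to track continuous interaction and trigger a reminder. SB 243 does not prescribe an implementation; the constants, class, and function names here (including the idle-gap reset) are our own illustrative assumptions, not statutory requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative constants: the statute's text is the authority, not these values.
CONTINUOUS_USE_LIMIT = timedelta(hours=3)   # "three continuous hours"
IDLE_RESET = timedelta(minutes=30)          # assumed gap that ends a "continuous" session

@dataclass
class ChatSession:
    started_at: datetime
    last_activity: datetime
    last_reminder: datetime | None = None

def needs_break_reminder(session: ChatSession, now: datetime) -> bool:
    """Call on each inbound user message. Returns True when the user has been
    continuously active for the limit since session start (or the last reminder)."""
    if now - session.last_activity > IDLE_RESET:
        # A long idle gap restarts the "continuous" clock.
        session.started_at = now
        session.last_reminder = None
    session.last_activity = now
    anchor = session.last_reminder or session.started_at
    return now - anchor >= CONTINUOUS_USE_LIMIT

# Caller, on each message:
#   if needs_break_reminder(session, datetime.now()):
#       send_break_reminder(session)       # hypothetical delivery function
#       session.last_reminder = datetime.now()
```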
AB 489 — Healthcare AI Disclosure Act. This law requires healthcare providers to disclose to patients when AI systems are used in clinical decision-making, including diagnosis, treatment recommendations, and insurance determinations. The disclosure must be made before the AI-assisted decision is finalized, giving patients the opportunity to request human review.
Illinois
HB 3773 — Amendment to the Illinois Human Rights Act. This is arguably the most consequential AI employment law in the country. HB 3773 amends the Illinois Human Rights Act to explicitly define the use of AI in employment decisions — hiring, promotion, termination, and compensation — as a potential civil rights violation. The law creates a private right of action, meaning individuals who believe they were subjected to AI-driven employment discrimination can sue directly without waiting for a government agency to investigate. This is a significant departure from most other state AI laws, which rely on government enforcement. For companies using AI in hiring, this law transforms the algorithmic hiring discrimination landscape from a regulatory compliance question into a direct litigation risk.
Texas
HB 149 — Texas Responsible AI Governance Act (TRAIGA). Texas took a characteristically business-friendly approach to AI regulation. TRAIGA defines "restricted AI purposes" — including employment decisions, healthcare determinations, and criminal justice applications — and requires deployers of AI systems used for restricted purposes to conduct risk assessments and maintain documentation. The law provides an affirmative defense for companies that comply with the NIST AI Risk Management Framework, creating a clear safe harbor for organizations that invest in structured AI governance. Penalties range from $10,000 to $200,000 per violation, enforced exclusively by the state attorney general. There is no private right of action, a deliberate contrast to the Illinois approach.
Nevada
Political Advertising Synthetic Media Disclosures. Nevada now requires clear and conspicuous disclosure when synthetic media — including deepfakes and AI-generated content — is used in political advertising. The law applies to any political advertisement distributed within the state, regardless of where it was produced. Violations carry civil penalties and can be referred for criminal prosecution in cases of willful deception.
Montana
Right of Publicity for AI-Generated Likenesses. Montana expanded its right of publicity statute to explicitly cover AI-generated likenesses. The law prohibits the use of a person's name, image, or voice in AI-generated content without their consent, with enhanced protections for deceased individuals whose likenesses may be commercially exploited through AI tools.
Consumer Privacy States
Rhode Island, Nebraska, Indiana, and Kentucky all enacted comprehensive consumer privacy laws that include specific provisions for automated decision-making. These laws grant consumers the right to opt out of profiling and automated decisions that produce legal or similarly significant effects. While these are primarily privacy statutes rather than AI-specific laws, their automated decision-making provisions directly regulate many common AI applications, including credit scoring, insurance underwriting, and targeted advertising.
Coming Mid-2026: The Big Two
Two laws taking effect in mid-2026 will reshape the AI compliance landscape more dramatically than anything currently in force.
Colorado AI Act (SB 24-205) — Effective June 30, 2026
The Colorado AI Act is the first comprehensive state AI statute in the United States. Unlike the more targeted laws described above, Colorado's law attempts to create a complete regulatory framework for "high-risk" AI systems — systems that make or substantially contribute to consequential decisions about consumers in areas including employment, education, financial services, healthcare, housing, insurance, and legal services.
The law requires deployers of high-risk AI systems to conduct algorithmic impact assessments before deployment and annually thereafter. These assessments must evaluate the system's purpose, intended benefits, potential risks of algorithmic discrimination, the data used to train the system, and the metrics used to evaluate performance. Deployers must also provide consumers with clear disclosures about AI involvement in decisions that affect them, including the right to appeal AI-assisted decisions to a human reviewer.
Developers of high-risk AI systems face obligations as well, including providing deployers with sufficient documentation to conduct impact assessments, disclosing known limitations and risks, and reporting instances of algorithmic discrimination.
Penalties reach $20,000 per violation, enforced by the Colorado Attorney General. The law was originally scheduled to take effect on February 1, 2026, but was delayed to June 30 following lobbying by the technology industry and pressure from the Trump administration, which specifically named Colorado's law in its December 2025 executive order as an example of regulation that "goes beyond what is necessary."
Organizations deploying AI systems in Colorado — or affecting Colorado consumers — must begin impact assessments now to meet the June 30 deadline.
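For teams starting those assessments, a structured record keyed to the statute's enumerated elements is a sensible first artifact. The sketch below uses our own field names as shorthand for the categories described above; nothing in it is statutory language from SB 24-205.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """One record per high-risk system, completed before deployment and
    refreshed at least annually. Field names are illustrative shorthand."""
    system_name: str
    assessed_on: date
    purpose: str                           # the consequential decision the system informs
    intended_benefits: str
    discrimination_risks: list[str]        # known or foreseeable risks of algorithmic discrimination
    training_data_summary: str             # data used to train the system
    performance_metrics: dict[str, float]  # metrics used to evaluate performance
    mitigations: list[str] = field(default_factory=list)

def is_due_for_refresh(a: ImpactAssessment, today: date) -> bool:
    """Annual refresh check: True if the last assessment is over a year old."""
    return (today - a.assessed_on).days > 365
```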
California AI Transparency Act (SB 942) — Effective August 2, 2026
SB 942 requires providers of generative AI systems to implement content provenance mechanisms, including watermarks and latent disclosures embedded in AI-generated content. The law also requires developers to provide publicly accessible detection tools that allow users to determine whether content was generated by AI. This law will have enormous practical implications for media companies, content platforms, and any organization that uses AI to generate text, images, audio, or video.
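SB 942 leaves the technical standards to implementers, but the general shape of a provenance mechanism is straightforward: bind a manifest to the generated content and expose a check that verifies the binding. The sketch below is illustrative only. The manifest fields and plain SHA-256 hashing are our assumptions; production systems would more likely adopt an industry standard such as C2PA with cryptographic signing.

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, model_id: str, created_at: str) -> dict:
    """Build a minimal provenance record binding content to its generator.
    Fields are illustrative; SB 942 does not specify a manifest format."""
    return {
        "generator": model_id,
        "created_at": created_at,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Detection-tool sketch: confirm the manifest matches the content."""
    return (
        manifest.get("ai_generated") is True
        and manifest.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

# Example: ship the manifest alongside (or embedded within) the content.
content = b"an AI-generated article body"
manifest = make_provenance_manifest(content, model_id="example-model-v1", created_at="2026-08-02")
assert verify_provenance(content, manifest)
print(json.dumps(manifest, indent=2))
```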
The 2026 Bill Tsunami
Beyond the laws already enacted, state legislatures across the country are considering an unprecedented volume of AI-specific legislation. Several themes dominate the 2026 legislative session.
Chatbot Regulation: The Theme of 2026
Companion chatbot regulation has become the defining legislative theme of 2026, driven by a series of tragic incidents involving minors and AI chatbots. At least five states have active chatbot-specific bills:
- Virginia SB 796 — the AI Chatbots and Minors Act — would prohibit AI chatbot companies from deploying systems designed for ongoing conversational relationships with users under 18 without parental consent and built-in safety guardrails.
- Washington SB 5984 — modeled on California's SB 243, this bill would target companion chatbots with mandatory break reminders, emotional dependency warnings, and suicide prevention integration.
- Utah HB 438 — would require AI chatbot providers to disclose their AI nature within the first interaction and prohibit chatbots from encouraging self-harm, illegal activity, or sexual content with minors.
- Arizona HB 2311 — would create a "digital duty of care" for AI chatbot providers, establishing a negligence standard for foreseeable harms caused by chatbot interactions.
- Hawaii SB 3001 — would require age verification for AI chatbot access and mandate content filtering for minors, with civil penalties up to $50,000 per violation.
Employment AI
Following Illinois's lead, several states are moving to regulate AI in employment decisions. New York A 10251 would require employers to conduct bias audits of AI hiring tools before deployment, building on the framework established by New York City's Local Law 144. Rhode Island H 7767 would create a private right of action for employees subjected to AI-driven employment decisions. California SB 947 would extend the state's existing fair employment protections to explicitly cover AI-assisted decisions, giving the Civil Rights Department (formerly DFEH) enforcement authority over algorithmic hiring discrimination.
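Bias audits of the Local Law 144 variety center on impact ratios: each group's selection rate divided by the rate of the most-selected group. The sketch below shows that computation with invented numbers; the 0.8 cutoff is the EEOC's traditional four-fifths rule of thumb, not a threshold written into LL144 or any of the bills above.

```python
def selection_rates(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: number advanced divided by total applicants."""
    return {g: selected.get(g, 0) / applicants[g] for g in applicants if applicants[g] > 0}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's rate relative to the highest-rate group (LL144-style metric)."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative numbers only.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 50, "group_b": 24}
ratios = impact_ratios(selection_rates(selected, applicants))
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # four-fifths rule of thumb
print(ratios, flagged)  # group_b's ratio of 0.64 would be flagged for review
```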
Algorithmic Pricing
A newer category of AI regulation targets algorithmic pricing systems. Rhode Island H 7764 would prohibit landlords from using algorithmic pricing tools — specifically targeting platforms like RealPage — to set rental prices, on the theory that such tools facilitate tacit price-fixing. Utah SB 293 and Colorado HB 1210 would require transparency disclosures when algorithmic pricing tools are used in consumer-facing markets, including rental housing, insurance, and retail.
Healthcare AI
The Tennessee Senate passed a bill banning AI systems from serving as autonomous mental health professionals, responding to the proliferation of AI therapy chatbots that operate without clinical oversight. The bill would prohibit AI systems from providing diagnostic assessments, treatment plans, or therapeutic interventions without the direct supervision of a licensed mental health professional.
Transparency and Provenance
Washington HB 1170, modeled on California's SB 942, would require AI content provenance mechanisms including watermarking and detection tools. Utah HB 276 would require AI-generated content used in government communications to be clearly labeled.
The Federal Preemption Battle
The tension between the Trump administration's pro-innovation posture and the states' regulatory activism is reaching a breaking point. The most visible flashpoint is Utah HB 286, a proposed AI governance bill that drew a formal letter from the White House stating that the bill "goes against the Administration's AI Agenda." This was the first time the administration publicly intervened in a state legislative process on AI — and it will not be the last.
The legal arguments for federal preemption are not frivolous. The Commerce Clause gives Congress broad authority to regulate interstate commerce, and AI systems that operate across state lines are quintessentially interstate. But Congress has not exercised that authority. The executive branch cannot preempt state law by fiat. And the Supreme Court has consistently held that state regulatory authority is presumed valid in the absence of clear congressional intent to preempt.
Some companies are already challenging state AI laws in court. These challenges will take years to resolve. The practical reality for the foreseeable future is multi-state compliance. Organizations that wait for federal preemption to simplify their obligations are likely to find themselves in violation of laws that remain fully enforceable while the courts deliberate.
The Heppner ruling on AI privilege illustrates a related dynamic: courts are applying existing legal frameworks to AI in ways that create immediate, practical consequences for organizations. The regulatory and judicial landscape is moving simultaneously, and neither is waiting for the other.
What This Means for Your Organization
If your organization develops, deploys, or uses AI systems, the following steps are not optional. They are the minimum required to maintain legal compliance in the current environment.
1. Inventory all AI systems. You cannot comply with laws you do not know apply to you. Conduct a comprehensive inventory of every AI system your organization uses, including third-party tools, embedded AI features in enterprise software, and internally developed models. Document the purpose, data inputs, decision outputs, and affected populations for each system. (A minimal sketch of such an inventory record, together with the state mapping in step 2, appears after this list.)
2. Map systems to state requirements. For each AI system in your inventory, determine which state laws apply based on the system's function, the populations it affects, and the states in which those populations reside. A single AI hiring tool may be subject to Illinois's civil rights framework, Texas's TRAIGA requirements, Colorado's impact assessment mandate, and New York City's bias audit law simultaneously.
3. Conduct impact assessments. Colorado's AI Act requires algorithmic impact assessments by June 30, 2026. Even if your organization does not operate in Colorado, impact assessments represent best practice and will position you favorably under laws that are coming in other states. The NIST AI Risk Management Framework provides a solid foundation for these assessments and doubles as a TRAIGA affirmative defense in Texas.
4. Implement disclosure mechanisms. Multiple states now require consumer-facing disclosures about AI involvement in decisions. Design disclosure mechanisms that are clear, timely, and actionable — meaning they give affected individuals the opportunity to request human review or opt out where applicable.
5. Train staff on new obligations. Compliance is not just a legal department function. Product managers, engineers, HR professionals, and customer-facing staff all need to understand the AI compliance obligations that affect their work. Training should be role-specific and updated as new laws take effect.
6. Monitor preemption litigation. Track the activities of the Attorney General's AI Litigation Task Force and any federal court challenges to state AI laws. The preemption landscape could shift rapidly if a court issues an injunction against a major state law. But until that happens, compliance remains mandatory.
7. Do not assume federal preemption will save you. This bears repeating. The most dangerous compliance posture is waiting for the federal government to solve the problem. The current administration has expressed a preference for lighter regulation, but preferences are not law. State laws are law. Comply with them.
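To make steps 1 and 2 concrete, here is the minimal sketch promised above: an inventory record joined against a rules table keyed on state and function. Everything in it is illustrative. The field names and function are our own, and the three-entry rules table is nowhere near exhaustive and is no substitute for counsel.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry per AI system, including third-party and embedded tools.
    Field names are our own shorthand, not statutory terms."""
    name: str
    vendor: str | None                  # None for internally developed models
    purpose: str                        # e.g., "resume screening"
    data_inputs: list[str]
    decision_outputs: list[str]
    affected_populations: list[str]     # e.g., "job applicants"
    deployed_states: list[str] = field(default_factory=list)  # two-letter codes

# (state, function category) -> obligations. A tiny, non-exhaustive subset for illustration.
STATE_RULES: dict[tuple[str, str], list[str]] = {
    ("IL", "employment"): ["IL HB 3773: civil rights exposure; private right of action"],
    ("TX", "employment"): ["TX TRAIGA: risk assessment + documentation (NIST AI RMF safe harbor)"],
    ("CO", "employment"): ["CO SB 24-205: impact assessment before deployment and annually"],
}

def applicable_obligations(record: AISystemRecord, category: str) -> list[str]:
    """Join an inventory record against the rules table by affected state."""
    hits: list[str] = []
    for state in record.deployed_states:
        hits.extend(STATE_RULES.get((state, category), []))
    return hits

screener = AISystemRecord(
    name="resume-screener", vendor="ExampleVendor", purpose="resume screening",
    data_inputs=["resumes"], decision_outputs=["advance/reject"],
    affected_populations=["job applicants"], deployed_states=["IL", "TX", "CO"],
)
for obligation in applicable_obligations(screener, "employment"):
    print(obligation)
```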
The Expert Witness Angle
Every law described in this article creates potential litigation. Consider the vectors:
Illinois's private right of action for AI employment discrimination will generate a wave of individual and class action lawsuits. Plaintiffs will need technical experts who can analyze AI hiring systems for discriminatory outcomes. Defendants will need experts who can demonstrate that their systems comply with applicable standards.
Colorado's impact assessment requirement will create disputes about the adequacy of assessments. When a company's AI system causes harm and the impact assessment failed to identify the risk, the quality of the assessment becomes a central litigation issue — one that requires expert testimony to evaluate.
California's whistleblower protections under SB 53 will lead to retaliation claims by AI safety researchers and engineers. These cases will require experts who can evaluate whether the safety concerns the whistleblower raised were legitimate and whether the company's AI practices were consistent with industry standards.
The preemption battles themselves will require expert testimony about the technical feasibility of multi-state compliance, the actual burden of state AI regulations on innovation, and the differences and similarities between state regulatory approaches.
And as agentic AI systems become more autonomous and more widely deployed, the liability questions will multiply. Every autonomous action taken by an AI system in a regulated domain is a potential compliance violation, a potential tort, and a potential expert witness engagement.
The demand for qualified AI expert witnesses — professionals who can bridge the gap between technical AI systems and legal requirements — is about to explode. We are seeing it already in our practice, and the pace is accelerating.
Conclusion: Bookmark This Page
This guide will be updated monthly as new laws take effect, new bills advance, and the preemption litigation develops. The AI regulatory landscape is shifting weekly. What is a bill today may be law next month. What is enforceable today may be enjoined by a federal court next quarter. The only certainty is that the pace of change will not slow down.
Bookmark this page. Share it with your compliance team, your legal counsel, and your product managers. The organizations that navigate this landscape successfully will be those that treat compliance not as a burden but as a competitive advantage — because the alternative, as the penalty structures described above make clear, is far more expensive.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.