In January 2024, exactly two states had enacted AI-specific legislation. As of March 2026, that number stands at nineteen, with active bills pending in another fourteen. The Federal Trade Commission has issued its first comprehensive AI policy statement under Section 5 of the FTC Act. The EU AI Act is now in its phased enforcement period, with extraterritorial reach that affects every US company with European customers. And Congress, despite multiple attempts, has not passed a single piece of comprehensive federal AI legislation, leaving the field to a growing patchwork of state laws with conflicting requirements, overlapping jurisdictions, and no clear hierarchy.
If you are a compliance officer, general counsel, or outside advisor to an AI company, this is your operating environment. Here is your map.
The Big Three: California, Colorado, and Illinois
California SB 53 is the most consequential state AI law enacted to date. Signed into law in September 2025 and effective January 1, 2026, it imposes mandatory safety evaluations on developers of frontier AI models (defined as models that required more than 10^26 FLOP to train or that meet specified capability thresholds). Key requirements include pre-deployment safety testing, a mandatory "kill switch" capability for deployed models, annual third-party audits, incident reporting to the California Attorney General within 72 hours of discovering a critical safety failure, and whistleblower protections for employees who report safety concerns. The penalties are severe: up to $10 million per violation for knowing violations, with the Attorney General authorized to seek injunctive relief including mandatory recall of deployed models.
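For developers trying to gauge whether the frontier-model threshold might apply, a rough back-of-the-envelope screen is a useful first step. The sketch below uses the common 6ND approximation (roughly 6 FLOP per parameter per training token); that heuristic is our assumption, not the statute's compute-accounting method, and it says nothing about the separate capability-based thresholds.

    # Rough screen for SB 53's stated 10^26 FLOP training threshold.
    # Uses the common 6 * N * D approximation (6 FLOP per parameter per
    # training token). The statute's actual accounting rules may differ,
    # and its capability thresholds are not captured here at all --
    # treat this as a first-pass estimate, not legal advice.

    SB53_FLOP_THRESHOLD = 1e26

    def estimated_training_flop(n_params: float, n_tokens: float) -> float:
        """Standard 6ND heuristic for dense-model training compute."""
        return 6.0 * n_params * n_tokens

    def may_be_frontier_model(n_params: float, n_tokens: float) -> bool:
        return estimated_training_flop(n_params, n_tokens) > SB53_FLOP_THRESHOLD

    # A 1T-parameter model trained on 20T tokens lands at 1.2e26 FLOP,
    # above the threshold; a 70B model on 15T tokens (6.3e24) does not.
    print(may_be_frontier_model(1e12, 20e12))   # True
    print(may_be_frontier_model(70e9, 15e12))   # False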
Colorado's AI Act (SB 24-205), effective February 1, 2026, takes a different approach. Rather than targeting frontier model developers, Colorado focuses on "deployers" of "high-risk AI systems," meaning any AI system used to make consequential decisions about consumers in employment, education, financial services, healthcare, housing, insurance, or government services. Deployers must conduct impact assessments before deployment, provide notice to consumers that AI is being used in consequential decisions, offer an appeal process that includes human review, and maintain records of the impact assessments and their findings for at least five years. The Colorado Attorney General has exclusive enforcement authority; there is no private right of action.
Illinois has taken a sector-specific approach, building on its pioneering Biometric Information Privacy Act (BIPA) and its AI Video Interview Act. In 2025, Illinois enacted the AI Employment Decisions Act, which requires employers using AI in hiring, promotion, or termination decisions to provide written notice to affected individuals, conduct disparate impact analyses, and maintain records for audit purposes. The Illinois approach is notable because it includes a private right of action with statutory damages, making it the most plaintiff-friendly AI statute in the country.
The Emerging Tier: Texas, New York, Connecticut, Virginia, and Washington
Texas TRAIGA (Texas Responsible AI Governance Act), enacted in 2025, focuses on transparency. AI systems that interact with Texas consumers must disclose that the consumer is interacting with AI, identify the entity that deployed the system, and provide a mechanism for the consumer to opt out of AI-mediated interactions where feasible. Texas notably chose not to create a private right of action, leaving enforcement to the Attorney General.
New York has taken a piecemeal approach. Local Law 144, effective since 2023, regulates automated employment decision tools in New York City. But the state legislature has introduced (and so far failed to pass) a comprehensive AI bill modeled on the EU AI Act. As of March 2026, the bill has passed the state Senate and is pending in the Assembly. If enacted, it would be the most comprehensive state AI law in the country, with a risk-based classification system, mandatory conformity assessments, and a private right of action.
Connecticut enacted its AI Bill of Rights in 2025, which provides consumers with rights to explanation, correction, and human review when AI systems are used in consequential decisions. Washington State's AI Fairness Act focuses specifically on algorithmic discrimination in housing and employment, creating a cause of action under the state's existing civil rights framework. Virginia's AI Consumer Protection Act mirrors elements of Colorado's law but adds a unique provision requiring "algorithmic impact reports" to be filed with the state Attorney General and made publicly available in redacted form.
The FTC: Federal Enforcement Without Federal Legislation
In the absence of comprehensive federal AI legislation, the FTC has stepped into the regulatory vacuum. Its March 2026 AI Policy Statement, issued under the Commission's existing Section 5 authority (Section 5 of the FTC Act prohibits unfair or deceptive acts or practices), establishes the following enforcement framework:
Deceptive AI claims. Companies that make false or misleading claims about AI capabilities, including claims about accuracy, fairness, privacy protection, or human oversight, are subject to enforcement under the FTC Act's deception prong. The FTC specifically calls out "AI washing" (claiming that a product uses AI when it does not, or exaggerating the role of AI in a product's functionality) as a priority enforcement area.
Unfair AI practices. AI practices that cause substantial consumer injury that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits are unfair under the FTC Act. The policy statement identifies several categories of presumptively unfair AI practices: deploying AI systems with known discriminatory outcomes without adequate mitigation, using AI to target vulnerable populations for predatory marketing, and deploying AI systems that make consequential decisions about consumers without adequate testing.
Data practices. The FTC has long regulated data practices under Section 5. The AI policy statement extends this authority to AI training data, stating that collection of consumer data for AI training purposes is subject to the same notice, consent, and fairness requirements as other data collection practices. Companies that trained models on consumer data without adequate notice or consent face retroactive enforcement risk.
Nineteen states. The FTC. The EU AI Act. And no federal preemption in sight. This is not a compliance challenge. This is a compliance crisis.
The EU AI Act: Why US Companies Cannot Ignore It
The EU AI Act entered its phased enforcement period in 2025, with the first substantive requirements taking effect in August 2025 (prohibited practices) and the full framework becoming enforceable by August 2026. Its extraterritorial reach means that any AI system whose outputs are used in the EU, regardless of where the system was developed or where the provider is located, is subject to the Act's requirements.
For US AI companies, this creates a compliance obligation that cannot be avoided through corporate structuring or geographic limitation. If your AI system is used by a European customer, you are subject to the EU AI Act. Period. The Act's risk-based classification system, conformity assessment requirements, and transparency obligations represent a baseline that many US companies will find more stringent than any single US state law.
The practical challenge is that complying with the EU AI Act does not guarantee compliance with US state laws, and vice versa. Colorado's impact assessment requirements differ from the EU's conformity assessment requirements. California's safety testing obligations differ from the EU's prohibited practices and high-risk system requirements. A company operating in all three jurisdictions must comply with all three frameworks independently, with no safe harbor for compliance with one serving as compliance with another.
The Federal Preemption Question
The most important legal question in AI regulation today is whether federal legislation will preempt the growing patchwork of state laws. Multiple federal bills have been introduced, including the proposed Federal AI Accountability Act, which would establish a federal framework and expressly preempt conflicting state requirements. None have advanced beyond committee.
The AI industry strongly favors federal preemption, arguing that a patchwork of 50 different state AI laws would be unworkable. Consumer advocates and state officials resist preemption, arguing that federal legislation is likely to be weaker than state laws and that preemption would eliminate important consumer protections. This tension is unlikely to be resolved soon.
For companies, the practical implication is clear: do not wait for federal preemption. Comply with each applicable state law on its own terms. Build compliance infrastructure that is modular enough to adapt as new laws are enacted and existing laws are amended. And budget for the legal costs of monitoring, analyzing, and complying with a regulatory landscape that changes quarterly.
Practical Compliance Steps
Map your jurisdictional exposure. Determine which state laws apply based on where your AI systems are deployed, where your customers are located, and where the decisions your AI systems make have effect. This is not a one-time exercise; it must be updated as new laws are enacted.
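One way to keep the mapping exercise disciplined and repeatable is to encode it as data rather than as tribal knowledge. The sketch below is a deliberately simplified illustration: the applicability triggers are paraphrases of the statutes discussed above, not the statutory tests themselves, and the regime list is nowhere near exhaustive.

    # Hypothetical exposure map. Triggers are simplified paraphrases of
    # the statutes discussed above -- the real applicability tests are
    # more nuanced and require counsel. Re-run this analysis whenever a
    # law is enacted or amended, or the business footprint changes.

    from dataclasses import dataclass

    @dataclass
    class Footprint:
        deployment_states: set[str]      # where systems are deployed
        customer_states: set[str]        # where customers are located
        consequential_decisions: bool    # employment, credit, housing, etc.
        frontier_developer: bool         # trains frontier-scale models
        eu_outputs: bool                 # system outputs used in the EU

    def applicable_regimes(fp: Footprint) -> list[str]:
        states = fp.deployment_states | fp.customer_states
        regimes = []
        if fp.frontier_developer and "CA" in states:
            regimes.append("California SB 53")
        if fp.consequential_decisions and "CO" in states:
            regimes.append("Colorado SB 24-205")
        if fp.consequential_decisions and "IL" in states:
            regimes.append("Illinois AI Employment Decisions Act")
        if "TX" in states:
            regimes.append("Texas TRAIGA")
        if fp.eu_outputs:
            regimes.append("EU AI Act")
        return regimes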
Conduct impact assessments. Colorado, Virginia, Connecticut, and the EU AI Act all require some form of algorithmic impact assessment. Rather than conducting separate assessments for each jurisdiction, develop a comprehensive impact assessment methodology that satisfies the most demanding requirements and then tailor the output for each jurisdiction's specific format and filing requirements.
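In practice, "assess once, tailor per jurisdiction" can be as mechanical as projecting one superset document down to each filing format. The field names below are our own hypothetical shorthand, not statutory terms, and the per-jurisdiction field lists are illustrative only.

    # Hypothetical superset assessment, projected down per jurisdiction.
    # Field names are illustrative shorthand, not statutory terms; confirm
    # each jurisdiction's required contents and format against the law.

    SUPERSET_FIELDS = [
        "system_purpose", "data_sources", "disparate_impact_analysis",
        "mitigation_measures", "human_oversight", "retention_plan",
    ]

    JURISDICTION_FIELDS = {
        "CO": ["system_purpose", "data_sources",
               "disparate_impact_analysis", "mitigation_measures"],
        "VA": SUPERSET_FIELDS,   # plus a publicly filed, redacted version
        "EU": ["system_purpose", "data_sources",
               "human_oversight", "mitigation_measures"],
    }

    def tailor(assessment: dict[str, str], jurisdiction: str) -> dict[str, str]:
        """Project the superset assessment down to one jurisdiction's format."""
        return {k: assessment[k] for k in JURISDICTION_FIELDS[jurisdiction]}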
Build a disclosure framework. Nearly every AI law requires some form of consumer disclosure. Develop a disclosure framework that covers the superset of requirements across all applicable jurisdictions: notice that AI is being used, explanation of how the AI system reaches its decisions, information about opt-out and human review rights, and contact information for questions and complaints.
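The same superset logic applies to disclosures. A minimal sketch of a record covering the four elements listed above, offered as a starting checklist rather than any statute's prescribed contents:

    # Minimal disclosure record covering the superset elements above.
    # A starting checklist only; individual statutes prescribe their own
    # wording, timing, and delivery requirements.

    from dataclasses import dataclass

    @dataclass
    class AIDisclosure:
        ai_notice: str        # plain-language notice that AI is in use
        explanation: str      # how the system reaches its decisions
        review_rights: str    # opt-out and human-review instructions
        contact: str          # contact for questions and complaints

        def render(self) -> str:
            return "\n".join(f"{k}: {v}" for k, v in vars(self).items())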
Establish incident response protocols. California's 72-hour incident reporting requirement is the most demanding, but other jurisdictions are likely to adopt similar requirements. Build incident response protocols that can satisfy the fastest reporting deadline, and adapt the response for each jurisdiction's specific requirements.
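Operationally, the simplest way to honor the fastest-deadline-wins principle is to compute it. In the sketch below, the 72-hour window for California comes from the discussion above; everything else is a placeholder to be filled in as regimes apply.

    # Drive the incident clock off the strictest applicable deadline.
    # California's 72-hour window is as described above; add other
    # regimes' windows as they are enacted or become applicable.

    from datetime import datetime, timedelta

    REPORTING_WINDOWS = {
        "California SB 53": timedelta(hours=72),
    }

    def earliest_deadline(discovered_at: datetime, regimes: list[str]) -> datetime:
        """Return the soonest reporting deadline across applicable regimes."""
        windows = [REPORTING_WINDOWS[r] for r in regimes if r in REPORTING_WINDOWS]
        if not windows:
            raise ValueError("no reporting windows configured for these regimes")
        return discovered_at + min(windows)

    # Discovery at 2026-03-01 09:00 yields a 2026-03-04 09:00 deadline
    # under the 72-hour rule.
    print(earliest_deadline(datetime(2026, 3, 1, 9, 0), ["California SB 53"]))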
Retain technical experts. Compliance with AI-specific regulations requires technical expertise that most legal teams do not have in-house. Retain AI technical experts who can assist with impact assessments, safety evaluations, bias testing, and incident investigation. These experts will also be essential if your compliance practices are ever challenged in litigation or regulatory enforcement.
The compliance map is complex and getting more complex every quarter. But the alternative to proactive compliance (reactive response to enforcement actions and litigation) is far more expensive and far more damaging. Start mapping now.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.