The FTC's AI policy statement is not legislation. It is not a regulation promulgated through notice-and-comment rulemaking. It is a statement of enforcement priorities, issued by a 3-2 vote of the Commission, that tells the AI industry what the FTC considers unlawful under its existing authority. And while policy statements do not have the force of law, they have the force of practice: they signal what the FTC will investigate, what it will prosecute, and what theories of liability it will advance in federal court.

For AI companies, this policy statement is the closest thing to a federal AI enforcement playbook that exists. Here is what it says, what it means, and what you should do about it.

The Four Pillars of FTC AI Enforcement

The policy statement is organized around four enforcement priorities, each grounded in the FTC's existing authority under Section 5 of the FTC Act.

Pillar 1: Deceptive AI Claims

The FTC has signaled since 2023 that it views false or exaggerated AI claims as deceptive under Section 5. The new policy statement formalizes this position and provides specific examples of claims the FTC considers deceptive:

Performance claims without substantiation. If you claim your AI system achieves a specific accuracy rate, reduces costs by a specific percentage, or outperforms human decision-making, you must have competent and reliable evidence to support the claim before you make it. Post-hoc testing is not sufficient. The evidence must exist at the time the claim is made.

AI washing. Claiming that a product or service uses AI when it does not, or exaggerating the role of AI in a product's functionality, is deceptive. The policy statement specifically calls out companies that label manual processes as "AI-powered" to justify premium pricing or to attract investment. It also addresses reverse AI washing: claiming human involvement when the work is actually performed by AI.

Safety claims without basis. Claiming that an AI system is "safe," "trustworthy," or "bias-free" without adequate testing and evidence to support those claims is deceptive. The policy statement notes that no AI system is entirely free of bias or error, and absolute safety claims are presumptively misleading.

Implied endorsements. Using terms like "expert," "professional," "doctor-recommended," or "clinically validated" in connection with AI systems that have not actually been endorsed by experts, used by professionals, or validated in clinical settings is deceptive.

Pillar 2: Unfair AI Practices

Section 5's unfairness prong prohibits practices that cause substantial consumer injury that is not reasonably avoidable by consumers and not outweighed by countervailing benefits. The policy statement identifies several categories of AI practices that the FTC considers presumptively unfair:

Deploying AI with known discriminatory outcomes. If your AI system produces outcomes that disproportionately harm consumers based on race, gender, age, disability, or other protected characteristics, and you know about the disparity and have not taken adequate steps to mitigate it, the FTC considers that an unfair practice. The policy statement does not require proof of discriminatory intent; disparate impact is sufficient.
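
The policy statement does not mandate any particular disparity metric. One common screening heuristic, borrowed from the EEOC's "four-fifths rule" for employment selection, compares favorable-outcome rates across groups; the sketch below is purely illustrative, and all outcome data is hypothetical.

```python
def selection_rate(outcomes):
    """Fraction of favorable outcomes (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_outcomes, reference_outcomes):
    """Ratio of a group's selection rate to the reference group's.

    Under the EEOC's four-fifths screening heuristic, a ratio below
    0.8 is commonly treated as a red flag warranting further review.
    """
    return selection_rate(group_outcomes) / selection_rate(reference_outcomes)

# Hypothetical loan-approval outcomes for two demographic groups.
group_a = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
flagged = ratio < 0.8
print("Disparity flagged for review:", flagged)  # True
```

A screening ratio like this is only a first-pass disparity check, not proof of unfairness or a defense against it; it is the sort of documented, pre-deployment analysis the policy statement contemplates.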

Using AI to exploit vulnerable populations. AI systems that target elderly consumers, children, or other vulnerable populations for predatory marketing, manipulative pricing, or deceptive practices are presumptively unfair. The policy statement specifically mentions dark patterns implemented through AI personalization systems.

Making consequential decisions without adequate testing. Deploying AI systems that make decisions about consumers' access to credit, employment, housing, insurance, or healthcare without adequate pre-deployment testing for accuracy and fairness is an unfair practice. The policy statement does not specify a testing standard, but it references NIST's AI Risk Management Framework as a benchmark.

Pillar 3: Data Practices

The policy statement extends the FTC's longstanding data practice enforcement to AI-specific contexts:

Training data collection. Collecting consumer data for AI training purposes without adequate notice and consent is an unfair or deceptive practice. The policy statement takes the position that broad, general-purpose consent (such as a general terms-of-service provision) is not sufficient for AI training use if the consumer would not reasonably expect their data to be used for that purpose.

Inference data. AI systems generate inferences about consumers (predicted preferences, creditworthiness assessments, health risk scores) that may be as sensitive as the input data. The policy statement takes the position that inferences generated by AI systems about consumers are subject to the same notice and fairness requirements as directly collected data.

Model disgorgement. The policy statement reaffirms the FTC's authority to require "algorithmic disgorgement": the deletion of AI models trained on improperly collected data. This remedy, first used in the Everalbum case in 2021, is now explicitly part of the FTC's AI enforcement toolkit.

Pillar 4: Competition

The policy statement addresses AI-specific competition concerns:

Algorithmic collusion. AI pricing systems that facilitate tacit collusion among competitors, even without explicit agreement, may violate the FTC Act. The policy statement cites recent economic research showing that AI pricing algorithms can independently converge on supra-competitive prices without human coordination.
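
The research the statement cites involves reinforcement-learning pricing agents; as a deliberately minimal illustration of the underlying mechanism, two algorithms that simply match each other's most recent price will sustain a high starting price indefinitely, with no agreement and no communication. All prices here are hypothetical.

```python
# Two pricing algorithms, no communication: each matches the other's
# most recent price. Starting high, neither ever undercuts, so prices
# remain supra-competitive without any explicit agreement.
COMPETITIVE_PRICE = 10.0       # hypothetical marginal-cost benchmark
price_a, price_b = 25.0, 25.0  # both algorithms start high

history = []
for _ in range(20):
    new_a = price_b            # algorithm A: match B's last price
    new_b = price_a            # algorithm B: match A's last price
    price_a, price_b = new_a, new_b
    history.append((price_a, price_b))

print(history[-1])  # (25.0, 25.0): prices never fall toward 10.0
```

The cited research involves far richer learning dynamics, but the sketch captures the legal problem: tacitly coordinated, supra-competitive pricing can emerge from independent algorithm design choices alone.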

AI-enabled market concentration. The FTC will scrutinize mergers and acquisitions in the AI industry for their effects on competition, with particular attention to acquisitions of AI startups by dominant platform companies, exclusive arrangements for access to training data, and vertical integration that forecloses competitors from essential AI infrastructure.

The FTC's message is clear: it does not need new legislation to enforce against harmful AI practices. Section 5 provides the tools, and this policy statement is the instruction manual.

What the Policy Statement Does Not Say

As significant as the policy statement is, it is important to understand its limitations.

No safe harbors. The policy statement does not establish safe harbors or compliance certifications. Following NIST's AI Risk Management Framework or obtaining a third-party audit does not immunize a company from FTC enforcement. The FTC reserves the right to bring enforcement actions based on its own assessment of whether a practice is deceptive or unfair, regardless of the company's compliance efforts.

No specific testing standards. While the policy statement requires "adequate" pre-deployment testing, it does not specify what testing methodologies, benchmarks, or accuracy thresholds satisfy this requirement. This creates uncertainty for companies trying to comply in good faith.

No private right of action. The FTC Act does not provide a private right of action. Consumers and competitors cannot sue under the FTC Act directly. However, the policy statement's articulation of what the FTC considers deceptive or unfair will influence state attorneys general, who do have enforcement authority under state consumer protection laws, and private plaintiffs, who can cite the FTC's position as evidence of industry standards in common law claims.

Practical Guidance for Corporate Counsel

Conduct a marketing audit immediately. The most likely near-term enforcement actions will target deceptive AI claims. Review every public statement your company has made about its AI capabilities, including website copy, investor presentations, marketing materials, and press releases. Flag any claim that lacks pre-existing substantiation.

Implement pre-deployment testing. Before deploying any AI system that makes decisions about consumers, conduct and document testing for accuracy, fairness, and potential discriminatory impact. The testing documentation should be sufficient to demonstrate to the FTC, if asked, that the company took reasonable steps to ensure the system performs as claimed.
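
One way to make that documentation concrete is to evaluate the system on a labeled hold-out set before deployment and persist a dated record of the results. The model, data, and record fields below are hypothetical stand-ins; neither the policy statement nor the FTC prescribes any particular format.

```python
import datetime
import json

def evaluate_model(predict, test_cases):
    """Score a model on labeled hold-out cases; returns accuracy."""
    correct = sum(1 for features, label in test_cases if predict(features) == label)
    return correct / len(test_cases)

# Hypothetical stand-ins for a real model and a real hold-out set.
def toy_model(income):
    return 1 if income >= 50_000 else 0

holdout = [(60_000, 1), (40_000, 0), (55_000, 1), (30_000, 0), (70_000, 1)]

# A dated, machine-readable record created BEFORE deployment, so the
# evidence exists at the time any performance claim is made.
record = {
    "model": "toy_model v1 (hypothetical)",
    "tested_at": datetime.date.today().isoformat(),
    "holdout_size": len(holdout),
    "accuracy": evaluate_model(toy_model, holdout),
}
print(json.dumps(record, indent=2))
```

The substance of the test matters more than the tooling; the point is a timestamped artifact showing the testing happened before deployment, which also supports the substantiation requirement for performance claims under Pillar 1.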

Review data practices. Audit your AI training data pipeline for compliance with the FTC's notice and consent requirements. If your training data includes consumer data collected under general terms-of-service provisions, assess whether those provisions would satisfy the FTC's heightened expectations for AI training use.

Prepare for investigations. The FTC investigates before it prosecutes. A civil investigative demand (CID) from the FTC is expensive to respond to and disruptive to operations. Companies that have organized their compliance documentation, designated a point of contact for regulatory inquiries, and retained outside counsel with FTC experience will handle investigations more efficiently and with better outcomes.

Monitor enforcement actions. The policy statement is a roadmap. The enforcement actions that follow will be the case law. Track every FTC AI enforcement action for guidance on how the Commission interprets and applies the principles in the policy statement.

The FTC has drawn its lines. The AI industry can choose to operate within them or litigate the boundaries. But no company can claim ignorance. The playbook is public. The enforcement is coming. The only question is who goes first.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.