Grammarly's "Expert Review" feature was marketed as a premium offering for users who needed their writing reviewed by professionals. For $30 per document, Grammarly would provide what its marketing described as an "expert review" of your document, with "detailed feedback from a writing professional." Millions of premium subscribers paid for the service, trusting that a human expert was evaluating their work.

The class action complaint, filed in March 2026, alleges that Expert Review was not performed by human experts. According to the complaint, the "review" was generated entirely by Grammarly's AI system, with no meaningful human involvement. The "detailed feedback" was an AI-generated analysis. The "writing professional" was a large language model. And the $30 per review price point was, the plaintiffs allege, a premium charged for a service that cost Grammarly pennies to deliver.

Grammarly has not yet filed its response, so these are allegations, not proven facts. But if substantiated, the allegations would represent a case study in how AI marketing can cross the line from puffery into fraud.

The Legal Theories

The complaint asserts claims under several theories, each of which carries distinct implications for the broader AI industry.

California Unfair Competition Law (UCL). The UCL prohibits any "unlawful, unfair or fraudulent" business act or practice. The plaintiffs allege that marketing an AI-generated analysis as an "expert review" by a "writing professional" is fraudulent within the meaning of the statute because it creates a false impression of human involvement. The UCL's standing threshold is low: named plaintiffs must show they lost money or property as a result of the practice, but absent class members need not individually prove reliance. The statute provides for restitution and injunctive relief. If the court certifies a class, the exposure could be substantial: Grammarly has reportedly processed millions of Expert Review requests since the feature launched.

California Consumer Legal Remedies Act (CLRA). The CLRA prohibits deceptive practices in consumer transactions. Unlike the UCL, the CLRA provides for actual damages, punitive damages in cases of knowing violations, and attorney's fees. The complaint alleges that Grammarly knowingly misrepresented the nature of the Expert Review service, which, if proven, would support punitive damages.

Common law fraud. The fraud claim requires the plaintiffs to prove a knowing misrepresentation of material fact, intent to deceive, justifiable reliance, and damages. The key question is whether Grammarly knew or should have known that its marketing created a false impression of human involvement. Internal communications and marketing materials will be critical discovery targets.

Unjust enrichment. The plaintiffs seek disgorgement of the premium charged for Expert Review, on the theory that Grammarly was unjustly enriched by charging human-expert prices for AI-generated output.

The "AI Washing" Problem

The Grammarly case is the most prominent example of what regulators and commentators have begun calling "AI washing in reverse." Traditional AI washing involves companies claiming to use AI when they actually use humans (a practice the FTC has targeted since 2023). Reverse AI washing involves companies claiming human involvement when the work is actually done by AI.

Both practices are deceptive. But reverse AI washing may be more harmful to consumers because it means paying a premium for something the consumer is not receiving. If you pay $30 for an expert human review and receive an AI-generated analysis that cost the company $0.03 to produce, you have paid roughly 1,000 times the marginal cost of production.

The AI industry is rife with marketing language that implies capabilities the technology does not actually deliver. Products are described as "intelligent," "smart," "expert," and "professional" without clear disclosure that these are AI systems, not humans. Chatbots are given human names and presented in interfaces designed to simulate human conversation. AI-generated recommendations are described as "personalized advice" without disclosure that no human advisor was involved.

The word "expert" means something. When consumers pay a premium for expert review, they expect a human expert. If the "expert" is an algorithm, that needs to be disclosed. Period.

The Expert Witness Dimension

Cases like the Grammarly class action require technical expert testimony on several questions.

What did the system actually do? An AI expert can examine the architecture and workflow of the Expert Review system to determine the extent of human involvement. Was a human in the loop at any stage? Did a reviewer check the AI's output before it was sent to the customer, or was the process entirely automated? The answer determines whether the "expert review" marketing was materially misleading.

What is the system worth? If Expert Review was AI-generated, what was the actual cost of providing the service? This requires analysis of the computational cost of generating the review (API calls, inference costs, infrastructure) compared to the cost of a human expert performing the same task. The gap between actual cost and the price charged is relevant to the unjust enrichment claim and to the calculation of damages.
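The cost-gap analysis described above can be sketched as a back-of-envelope calculation. Everything below except the $30 price is an illustrative assumption: the token counts and per-token rates are hypothetical placeholders, not facts from the complaint, and a real damages analysis would substitute figures obtained in discovery.

```python
# Hypothetical cost-gap sketch: marginal cost of an AI-generated review
# versus the price charged. All inputs except PRICE_CHARGED are assumed.

PRICE_CHARGED = 30.00        # price per Expert Review, per the complaint

INPUT_TOKENS = 8_000         # assumed: typical document plus prompt
OUTPUT_TOKENS = 2_000        # assumed: length of the generated review
RATE_PER_1K_INPUT = 0.003    # assumed $/1K input tokens
RATE_PER_1K_OUTPUT = 0.015   # assumed $/1K output tokens

def inference_cost(input_tokens: int, output_tokens: int,
                   in_rate: float = RATE_PER_1K_INPUT,
                   out_rate: float = RATE_PER_1K_OUTPUT) -> float:
    """Estimated marginal API cost of generating one review."""
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

cost = inference_cost(INPUT_TOKENS, OUTPUT_TOKENS)
multiple = PRICE_CHARGED / cost
print(f"Estimated marginal cost per review: ${cost:.4f}")
print(f"Price-to-cost multiple: {multiple:,.0f}x")
```

Under these assumed rates the price-to-cost multiple lands in the hundreds; the point of the sketch is the structure of the comparison, not the specific numbers, which would be replaced by actual infrastructure and labor costs developed in discovery.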

How does the output compare to human expert work? An AI expert can compare the quality, depth, and accuracy of AI-generated reviews to those produced by human writing professionals. If the AI output is substantially inferior to human expert work, that supports the claim that consumers were not receiving what they paid for. If the AI output is comparable to human work, that may mitigate damages but does not eliminate the deception claim: the consumer still did not receive what was promised.

What did Grammarly know and when? Internal documentation, including product development records, A/B testing data, customer feedback, and engineering communications, may reveal whether Grammarly was aware that its marketing created a false impression. An expert can analyze the technical workflow documentation to assess whether the gap between marketing and reality was known to the company.

Implications for the AI Industry

If the Grammarly class action succeeds, or even if it survives a motion to dismiss and reaches discovery, it will send a clear signal to the AI industry about the boundaries of acceptable marketing.

Disclosure requirements will intensify. AI companies will need to clearly disclose the role of AI in their products, particularly when marketing language implies human involvement. "Expert review," "professional analysis," "personalized consultation," and similar phrases will need to be accompanied by clear disclosure that the service is AI-powered if that is the case.

Premium pricing for AI services will face scrutiny. If the court finds that charging human-expert prices for AI-generated output is deceptive, AI companies will need to justify their pricing based on the value delivered, not on the implied cost of human labor. This could compress margins across the AI industry for services that are marketed at premium price points.

The FTC will take notice. The FTC has already identified AI marketing claims as an enforcement priority. A successful class action based on misleading AI marketing would validate the FTC's enforcement posture and likely trigger additional investigations. Companies that have not reviewed their AI marketing claims for compliance with FTC guidance should do so immediately.

Class certification will be watched closely. The plaintiffs are seeking class certification, which would extend the case to all users who purchased Expert Review. If certified, the damages could be enormous. Defense counsel in other AI marketing cases will watch the class certification decision closely, as it will establish whether common questions (was the marketing deceptive?) predominate over individual questions (did each plaintiff rely on the marketing?).

What AI Companies Should Do Now

Audit your marketing language. Review every customer-facing description of your AI product for language that implies human involvement, expertise, or judgment that is not actually present. Replace misleading terms with accurate descriptions of what the AI system actually does.

Disclose AI involvement clearly. If your product uses AI to perform tasks that consumers might expect to be performed by humans, disclose the AI involvement prominently and clearly. "AI-powered" and "AI-assisted" are appropriate labels. "Expert" and "professional" are not, unless human experts are actually involved.

Justify your pricing. If you charge premium prices for AI-generated services, ensure that your pricing is justified by the value delivered, and that your marketing does not create a false impression about the cost structure of the service.

Preserve evidence. If you are an AI company that has marketed products using language that implies human involvement, implement a litigation hold on all marketing materials, product development records, customer feedback, and internal communications about the product's capabilities and marketing. The Grammarly case will not be the last of its kind.

The Grammarly case is the canary in the coal mine. The AI industry has been marketing with imprecise, aspirational language for years. Courts and regulators are now paying attention. The companies that clean up their marketing now will avoid the lawsuits that are coming for those that do not.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.