On February 18, 2026, a three-judge panel of the U.S. Court of Appeals for the Fifth Circuit did something that has become disturbingly routine. It sanctioned a lawyer for filing a brief full of fake case citations generated by artificial intelligence. The fine was $2,500. The language in the opinion was far more costly.
Chief Judge Jennifer Walker Elrod, joined by Judges Jerry Smith and Cory Wilson, wrote that AI-hallucinated citations "have increasingly become an even greater problem in our courts." The problem, the court said, "shows no sign of abating."
That phrase should alarm every attorney in the country. Not because $2,500 is a devastating sum. But because it signals that federal appellate courts have moved past surprise, past disappointment, and into something closer to institutional exhaustion.
What Happened in Fletcher v. Experian
The underlying case, Fletcher v. Experian Information Solutions, Inc., involved a Fair Credit Reporting Act claim against a lender and a credit reporting agency. The district court in Texas had sanctioned plaintiff's counsel, Shawn Jaffer, and his firm (then Jaffer & Associates) $23,000 in attorneys' fees, concluding he had not conducted even a minimal investigation before filing suit.
Jaffer appealed. Attorney Heather Hersh of FCRA Attorneys filed the reply brief on appeal. That is where things went sideways.
The Fifth Circuit identified 21 instances of fabricated quotations or serious misrepresentations of law or fact in the brief. Sixteen were outright fabricated quotations. Five were additional serious misrepresentations. The court issued a show-cause order asking Hersh to explain herself.
Her initial response made things worse. She claimed she had "relied on publicly available versions of the cases" and pointed to well-known legal databases as the source of the errors. Judge Elrod called this explanation "not credible" and "misleading in several respects." Only when the court specifically asked whether she had used AI did Hersh admit to using generative AI to "help organize and structure" her argument.
"Had Hersh accepted responsibility and been more forthcoming, it is likely that the court would have imposed lesser sanctions. However, when confronted with a serious ethical misstep, Hersh misled, evaded, and violated her duties as an officer of this court."
The court sanctioned her $2,500 under its inherent power and Federal Rule of Appellate Procedure 46(c), which addresses conduct unbecoming a member of the bar. In an ironic twist, the Fifth Circuit vacated the underlying sanctions order against Jaffer: Hersh won the appeal for her client but earned her own sanction in the process.
The Growing Timeline of AI Sanctions
To understand why the Fifth Circuit's frustration is boiling over, you need to see the timeline. This is not an isolated incident. It is a pattern that has been accelerating for nearly three years.
June 2023: Mata v. Avianca (S.D.N.Y.). The case that started it all. Attorneys Steven Schwartz and Peter LoDuca submitted a brief containing six entirely fabricated case citations generated by ChatGPT. Judge P. Kevin Castel sanctioned them $5,000. The story made international headlines and became the cautionary tale that should have ended this problem.
2024: Park v. Kim (2d Cir.). Another attorney faced discipline, this time a referral to the court's grievance panel, for citing a nonexistent, ChatGPT-generated case without verification. The case demonstrated that Mata's lesson was not sinking in as broadly as the profession needed.
2025-2026: Gordon Rees Scully Mansukhani. One of the largest law firms in the country found itself entangled in AI-related sanctions proceedings. When a firm of that size and reputation gets caught, the message is clear: this is not a problem limited to solo practitioners or small shops cutting corners.
February 2026: Kansas Patent Case. Just two weeks before the Fifth Circuit opinion, a Kansas federal judge fined five attorneys a combined $12,000 for filing documents with non-existent quotations and case citations generated by AI. Five lawyers. One filing. Twelve thousand dollars.
February 2026: Fletcher v. Experian (5th Cir.). And now this. The first time the Fifth Circuit has sanctioned a lawyer for AI hallucinations. The opinion cited a database maintained by French lawyer and data scientist Damien Charlotin that, as of the date of the opinion, listed 239 documented cases of AI-generated hallucinations in filings submitted by lawyers in the United States.
Two hundred and thirty-nine. And those are just the ones that got caught.
Why Courts Are Running Out of Patience
The Fifth Circuit's opinion was notable for what it did not do: offer sympathy.
In the early cases, courts treated AI hallucination incidents with a degree of understanding. The technology was new. Lawyers did not fully appreciate the risks. There was a learning curve. That grace period is over.
Judge Elrod noted that the Fifth Circuit had considered adopting a first-of-its-kind appellate rule regulating the use of generative AI by lawyers but opted against it in 2024, concluding that existing rules were sufficient. The opinion makes clear that this decision was not an invitation to be careless. It was a statement of trust that the bar could govern itself.
That trust is eroding. As Judge Elrod wrote, "If it were ever an excuse to plead ignorance of the risks of using generative AI to draft a brief," that time has long since passed. Every lawyer in America has had access to the same headlines, the same CLE programs, the same bar association guidance. Ignorance is no longer a defense. It is an aggravating factor.
The opinion also offered three pieces of practical guidance that read less like suggestions and more like warnings:
Use the right tool for the job. General-purpose LLMs like ChatGPT are not legal research platforms. Specialized tools like Westlaw's AI-assisted research product limit hallucinations by grounding responses in verified case databases. Using a general chatbot for legal research, the court suggested, is like using a screwdriver to drive a nail.
Do not ignore red flags. When an LLM cites the same case repeatedly for multiple propositions, that is a warning sign. When a citation seems "too good to be true," with a quote that is amazingly on point, it probably is too good to be true.
Admit the obvious. When caught, come clean immediately. Hersh's evasiveness turned what might have been a lighter sanction into a $2,500 fine and a published opinion bearing her name.
Understanding the Three Flavors of Punishment
Not all sanctions are created equal, and the legal mechanisms courts use to punish AI-related misconduct carry different implications for the attorneys involved.
Rule 11 Sanctions (Federal) and State Equivalents. Rule 11 of the Federal Rules of Civil Procedure requires attorneys to certify that the legal contentions in their filings are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing it. Filing a brief full of fabricated citations is a textbook Rule 11 violation. These sanctions typically involve monetary penalties and can include requiring the attorney to pay the opposing party's fees incurred in responding to the defective filing.
Inherent Power Sanctions. This is what the Fifth Circuit used in Fletcher. Courts have inherent authority to sanction attorneys for conduct that abuses the judicial process. Inherent power sanctions can be more flexible, and more severe, than rule-based sanctions. They do not require a motion from the opposing party. The court can act on its own. When a federal appeals court invokes its inherent power to sanction you, that is the judicial system telling you it considers your conduct a threat to the integrity of the process itself.
Bar Discipline. This is the nuclear option. State bar associations can investigate attorneys for ethical violations arising from AI misuse, potentially leading to suspension or disbarment. While most AI hallucination cases have not yet resulted in bar discipline, the trajectory is clear. As the number of incidents grows and courts make their frustration more explicit, bar associations will feel pressure to act. The Model Rules of Professional Conduct require competence (Rule 1.1), which increasingly means competence in understanding the tools you use.
The Fifth Circuit specifically cited Rule 46(c), addressing "conduct unbecoming a member of the bar." That language bridges courtroom sanctions and professional discipline. It is a signal that the wall between the two is thinning.
The Legal Malpractice Insurance Problem
Here is the part that should keep managing partners up at night. Legal malpractice insurers are paying attention to this trend, and they do not like what they see.
Every AI hallucination sanctions case creates a potential malpractice claim. The client whose brief contained fabricated citations has a straightforward argument: my lawyer's negligent use of AI technology harmed my case. Even in Fletcher, where the appeal was ultimately successful, the sanctions themselves represent a harm that did not need to happen.
Insurers are beginning to ask pointed questions on renewal applications about AI usage policies. Firms without documented protocols for AI verification, without training programs, without clear guidelines on which AI tools are approved for which tasks, will face higher premiums. Some may face coverage exclusions.
The calculus is simple. A firm that allows attorneys to use unvetted AI tools without verification protocols is a firm that is manufacturing malpractice risk. Insurers price risk. This risk is about to get expensive.
Prevention: What Actually Works
The good news is that AI hallucination sanctions are entirely preventable. Every single case in the growing database of 239 incidents could have been avoided with basic verification protocols. Here is what firms need to implement now.
Verification Protocols
Every citation generated or suggested by AI must be independently verified against an authoritative legal database. Not spot-checked. Not sampled. Every single one. This is not optional. It is the bare minimum standard of care.
The verification process should be documented. When an attorney files a brief that relied on AI assistance, the file should contain a record showing that each citation was pulled up, read, and confirmed to say what the brief claims it says. If you cannot prove you verified, you might as well not have verified at all.
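What might that documentation look like in practice? The sketch below, in Python, is one minimal way to structure it; the citation pattern is deliberately simplified, the `VerificationRecord` fields are illustrative assumptions rather than any established standard, and nothing here substitutes for pulling up and reading each case. It also automates one of the red flags the Fifth Circuit described: the same case cited over and over.

```python
import re
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

# Deliberately simplified pattern for federal reporter citations,
# e.g. "598 F.3d 1336" or "141 S. Ct. 1017". Real extraction needs
# a proper citation parser; this regex is illustrative only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

@dataclass
class VerificationRecord:
    """One line of the paper trail: who confirmed what, and when."""
    citation: str
    found_in_database: bool   # result of a lookup in an authoritative legal database
    quote_confirmed: bool     # attorney read the case and checked the quoted language
    checked_by: str
    checked_on: date = field(default_factory=date.today)

def extract_citations(brief_text: str) -> list[str]:
    """Pull every reporter citation from the draft, in order of appearance."""
    return CITATION_RE.findall(brief_text)

def repeated_citation_red_flags(citations: list[str], threshold: int = 3) -> list[str]:
    """Flag cases cited unusually often, the pattern the court singled out."""
    return [cite for cite, count in Counter(citations).items() if count >= threshold]
```

The point is not automation for its own sake. It is that the file can later prove each citation was checked, by whom, and on what date.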
AI Disclosure Requirements
A growing number of courts now require attorneys to disclose whether AI was used in preparing filings. These standing orders vary in scope. Some require disclosure of any AI use. Others focus specifically on generative AI used for legal research or drafting. The list of courts with such requirements grows monthly.
Smart firms are not waiting for mandates. They are adopting voluntary disclosure practices and building them into their filing checklists. Voluntary disclosure creates a paper trail of good faith. When something does go wrong (and statistically, it eventually will), demonstrating that you had a disclosure policy in place, and followed it, matters enormously.
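As a sketch of what one entry on such a checklist might capture, consider the structure below. The field names are assumptions chosen for illustration, not any court's required format.

```python
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    """One checklist entry per filing, retained whether or not the court requires disclosure."""
    matter: str                # internal matter or docket identifier
    tool_category: str         # e.g. "general-purpose chatbot" or "grounded research platform"
    purpose: str               # "drafting", "research", "summarization", ...
    citations_verified: bool   # every AI-suggested citation independently confirmed
    disclosed_to_court: bool   # whether disclosure was made under the court's standing order
    responsible_attorney: str
```

Kept consistently, these entries become exactly the paper trail of good faith described above.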
Firm-Wide Policies
Individual attorney discipline is not enough. Firms need institution-level policies that address:
- Approved tools: Which AI platforms are authorized for use, and for which tasks? A general-purpose chatbot might be fine for brainstorming but prohibited for citation research (one way to encode this distinction appears in the sketch after this list).
- Training requirements: Every attorney who uses AI tools should complete training on hallucination risks, verification procedures, and the firm's specific policies.
- Supervision protocols: Junior associates are the most likely users of AI tools and the least likely to have developed the judgment to spot hallucinated content. Senior attorneys must review AI-assisted work product with the same rigor they would apply to any other delegated task.
- Incident reporting: When a hallucination is caught during internal review (which is the system working correctly), it should be documented and used as a training opportunity. Not punished. Not hidden. Documented.
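The "approved tools" item lends itself to policy-as-data. The sketch below is one minimal way a firm might encode it; the tool categories and task names are placeholders, not endorsements or prohibitions of any specific product.

```python
# Hypothetical policy table: which tool categories the firm permits for which tasks.
APPROVED_USES: dict[str, set[str]] = {
    "general_purpose_chatbot": {"brainstorming", "summarizing_own_documents"},
    "grounded_research_platform": {"brainstorming", "citation_research", "drafting_support"},
}

def use_is_permitted(tool_category: str, task: str) -> bool:
    """Allow a use only if the policy explicitly lists it; the default is no."""
    return task in APPROVED_USES.get(tool_category, set())

# Under this policy, citation research with a general chatbot is simply off the table.
assert not use_is_permitted("general_purpose_chatbot", "citation_research")
```

The design choice worth copying is the default: anything not expressly permitted is denied.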
Standing Orders: The Growing Patchwork
Federal and state courts across the country are adopting standing orders that address AI use in legal filings. The approaches vary widely, creating a patchwork that attorneys practicing in multiple jurisdictions must navigate carefully.
Some courts require a simple certification that AI-generated content has been verified for accuracy. Others require detailed disclosure of which AI tools were used and how. A few prohibit the use of generative AI for legal research entirely, though this approach is increasingly seen as impractical.
The Fifth Circuit's decision not to adopt a specific AI rule in 2024, followed by this sanctions opinion in 2026, sends a particular message: we trust that existing ethical obligations are sufficient, but we will enforce them aggressively when lawyers fail to meet them. For practitioners before the Fifth Circuit, the takeaway is that the absence of a specific AI rule does not mean the absence of consequences.
Attorneys should maintain a current list of standing orders in every jurisdiction where they practice. Several legal technology organizations and bar associations now maintain databases of these orders. Checking before filing is not just good practice. It is a professional obligation.
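A pre-filing check against such a list can be as simple as the sketch below. The entries shown are hypothetical placeholders; the real table has to be built from each court's current orders and rechecked before every filing.

```python
from enum import Enum

class DisclosureRule(Enum):
    NONE_ON_FILE = "no AI standing order recorded"
    CERTIFY_VERIFIED = "certify that AI-assisted content was verified"
    DISCLOSE_TOOLS = "disclose which AI tools were used and how"

# Illustrative placeholder entries only.
STANDING_ORDERS: dict[str, DisclosureRule] = {
    "Hypothetical District A": DisclosureRule.CERTIFY_VERIFIED,
    "Hypothetical District B": DisclosureRule.DISCLOSE_TOOLS,
}

def prefiling_requirement(court: str) -> DisclosureRule:
    """Fail loudly for an unlisted court rather than assuming no order exists."""
    if court not in STANDING_ORDERS:
        raise LookupError(f"No standing-order entry for {court!r}; confirm before filing.")
    return STANDING_ORDERS[court]
```

Failing loudly on an unknown jurisdiction, rather than silently assuming no order exists, mirrors the professional obligation: check before filing.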
The Role of AI Expert Witnesses
As sanctions for AI hallucinations become more common, a new category of expert testimony is emerging. AI expert witnesses are being retained on both sides of these disputes.
For attorneys facing sanctions: An AI expert can explain the technical mechanisms behind hallucination, demonstrate that the attorney's reliance on AI output was consistent with (or fell below) the standard of care for AI-assisted legal research, and provide context about hallucination rates in specific models and configurations. This testimony can be the difference between a finding of negligence and a finding of reasonable reliance on a tool that failed in a non-obvious way.
For courts and opposing parties: An AI expert can evaluate the specific outputs in question, determine whether they exhibit patterns consistent with AI generation (as opposed to human error), assess whether the sanctioned attorney's firm had adequate AI governance policies, and quantify how readily the fabrications could have been detected with reasonable verification efforts.
For malpractice claims: When a client sues over AI-generated errors in their case, expert testimony on both the technical and professional-responsibility dimensions is essential. The expert must bridge the gap between how the technology works and what the standard of care requires, translating token prediction and hallucination rates into language that a jury can understand and apply.
This is still a nascent field. The pool of qualified experts who understand both the technology and the legal context is small. But as the caseload grows, so will the demand.
Where This Is Heading
Let me be direct about the trajectory. The sanctions are going to get worse.
In 2023, Mata v. Avianca was shocking. In 2024, similar cases were disappointing. By early 2026, with 239 documented incidents and counting, courts are angry. The Fifth Circuit's "shows no sign of abating" language is not an observation. It is a warning shot.
The next phase will likely involve larger monetary sanctions, referrals to state bar disciplinary authorities, and, eventually, malpractice verdicts that dwarf any sanction amount. The first seven-figure malpractice judgment arising from AI-hallucinated legal work is not a matter of if. It is a matter of when.
For attorneys: the tools are powerful. Use them. But verify everything. Disclose when required (and even when not). Build systems that catch errors before they reach the court. The technology is not going away, and neither is the responsibility to use it competently.
For firms: this is a governance problem, not an individual attorney problem. The firms that build robust AI policies now will avoid sanctions, reduce malpractice exposure, and, frankly, deliver better work product. The firms that do not will find themselves in the growing database of cautionary tales.
For clients: ask your lawyers about their AI policies. Ask what tools they use, how they verify outputs, and whether they have training programs in place. You have a right to know. Your case depends on it.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, including AI hallucination sanctions defense and prosecution. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.