Let's be direct about what happened here. Gordon Rees Scully Mansukhani, one of the largest law firms in the United States with over 1,000 attorneys across dozens of offices, filed a legal brief containing case citations that do not exist. The cases were fabricated by an AI tool. The citations looked real. The holdings sounded plausible. But the cases were phantoms, conjured by a large language model that does not know (and cannot know) the difference between a real judicial opinion and a convincing fiction.

That would be bad enough on its own. What makes this story extraordinary is that it happened twice.

After the first incident, the firm acknowledged the problem. They pledged to implement safeguards. They said the right things about responsible AI use. And then, not long after, another brief from the firm surfaced with the same problem: AI-hallucinated citations that no one bothered to verify before filing.

The legal internet, predictably, lost its mind. Above the Law ran the story and it went viral. Legal Twitter erupted. But underneath the schadenfreude, there is a serious problem that extends far beyond one firm. This is a systemic failure. And it is going to get worse before it gets better.

The First Time: Embarrassing but Survivable

The first Gordon Rees incident followed a pattern that had become grimly familiar by that point. An attorney at the firm used an AI tool to assist with legal research. The tool generated citations to cases that appeared legitimate. The attorney incorporated those citations into a brief. The brief was filed. Opposing counsel or the court discovered that the cited cases did not exist.

When this first surfaced, Gordon Rees responded with the standard playbook. The firm expressed regret. They emphasized that the incident did not reflect firm policy. They committed to implementing new protocols around AI use. Internal memos were circulated. Training sessions were presumably scheduled.

At the time, many observers were willing to extend some grace. AI tools were relatively new in legal practice. The risks of hallucination were not yet widely understood. Reasonable people could believe that a large firm might have a single attorney who made a bad judgment call, and that institutional safeguards would prevent a recurrence.

That grace period is over.

The Second Time: A Pattern, Not an Accident

When a second brief from Gordon Rees surfaced containing AI-fabricated citations, the narrative changed completely. One incident is a mistake. Two incidents is a pattern. And a pattern suggests something far more troubling than a single rogue attorney: it suggests that the firm's internal controls either do not exist, do not work, or are not being followed.

Think about what had to go wrong for this to happen again. After the first incident, someone at the firm was supposed to have implemented safeguards. Those safeguards were supposed to ensure that every citation in every brief was verified against an actual legal database before filing. Somewhere in that chain, the system failed.

There are only a few possibilities, and none of them are good:

  • The safeguards were never actually implemented. The firm said the right things publicly but never followed through with real policy changes. This would suggest a culture that treats compliance as a public relations exercise rather than a genuine obligation.
  • The safeguards exist but are not enforced. Policies were written and distributed, but individual attorneys are not following them, and no one is checking. This points to a supervision failure that has implications well beyond AI use.
  • The safeguards are inadequate. The firm implemented something, but whatever they put in place is not sufficient to catch AI-generated fabrications. This raises questions about the firm's technical understanding of the tools they are using.

Any of these explanations should concern the firm's clients, its insurers, and the courts where it practices.

The Growing Hall of Shame

Gordon Rees is not the first firm to file AI-hallucinated citations, and they will not be the last. But the pattern is becoming impossible to ignore.

Mata v. Avianca (2023) remains the landmark case. Attorneys Steven Schwartz and Peter LoDuca submitted a brief containing six entirely fabricated case citations generated by ChatGPT. When the court asked about the citations, Schwartz doubled down, submitting additional filings that included fake judicial opinions (also generated by ChatGPT) as "proof" that the cases were real. Judge P. Kevin Castel imposed a joint $5,000 sanction on the attorneys and their firm and required them to notify the judges who had supposedly authored the fabricated opinions. The case became a national story and a permanent stain on both attorneys' careers.

Park v. Kim (2024) extended the pattern to the appellate level. The attorney in that case cited a nonexistent decision, generated by ChatGPT, in a reply brief to the Second Circuit, which referred her to the court's grievance panel. The case reinforced the message that AI hallucinations in legal filings are not limited to any particular court, practice area, or level of attorney experience.

And now Gordon Rees. Twice.

What distinguishes the Gordon Rees situation is scale. Schwartz practiced at a small firm. The attorney in Park v. Kim was a solo practitioner. Gordon Rees is an AmLaw 100 firm. They have the resources, the infrastructure, and the institutional capacity to implement real AI governance. If they cannot get this right, the implication is that the problem is not about resources. It is about culture.

Why This Keeps Happening

The technical explanation is straightforward: large language models hallucinate. They generate text that is statistically plausible but factually ungrounded. When you ask an LLM to generate legal citations, it produces strings of text that look like citations because it has seen millions of citations in its training data. Whether those strings correspond to real cases is a question the model cannot answer. (For a deeper technical explanation, see our article on AI hallucinations in the courtroom.)

But the technical explanation only gets you so far. The real question is why attorneys keep filing these fabricated citations without checking them. That is a human problem, not a technology problem.

Over-reliance on AI output. There is a dangerous tendency to treat AI-generated text as presumptively reliable. When a tool produces output that looks professional and reads convincingly, the natural human inclination is to trust it. This is especially true when the attorney is under time pressure, which is always.

No verification culture. Many firms have not built verification into their AI workflows. Using an AI tool and then independently verifying every output against primary sources takes time. It sometimes takes more time than doing the research manually in the first place. If the firm's culture prizes speed and efficiency over accuracy (or if attorneys believe they are being judged on billable output rather than quality), verification gets skipped.

Misunderstanding the technology. A surprising number of attorneys still treat AI research tools as though they were databases. They are not. A legal database like Westlaw or LexisNexis retrieves existing documents. An LLM generates new text. That distinction is fundamental, and failing to understand it leads directly to the kind of errors we are seeing.

Diffusion of responsibility. In a large firm, it is easy for everyone to assume someone else is checking. The associate assumes the partner reviewed the citations. The partner assumes the associate verified them. The result is that no one verifies anything.
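
To make the retrieval-versus-generation distinction concrete, here is a deliberately toy Python sketch. Everything in it is invented for illustration (no real vendor API appears here); the point is structural: a lookup can fail closed and return nothing, while a generative model always returns something that looks right.

```python
import random

# Retrieval: a database is backed by records that actually exist.
# (Toy stand-in for Westlaw/LexisNexis; contents are illustrative.)
CITATION_DATABASE = {
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)": "opinion text ...",
}

def database_lookup(citation: str) -> str | None:
    """Returns the document if it exists, None if it does not. Fails closed."""
    return CITATION_DATABASE.get(citation)

def llm_style_generate(prompt: str) -> str:
    """Always emits plausible-looking text, whether or not the case is real."""
    party = random.choice(["Smith", "Jones", "Acme Corp."])
    volume, page = random.randint(100, 999), random.randint(1, 999)
    return f"{party} v. United States, {volume} F.3d {page} (9th Cir. 2019)"

print(database_lookup("Fake v. Case, 123 F.3d 456 (1st Cir. 1997)"))  # -> None
print(llm_style_generate("Cite a case supporting dismissal"))  # -> looks real
```

A real LLM is vastly more sophisticated than three lines of random choices, but the failure mode is the same: the generator has no None branch. It cannot say "no such case."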

Courts Are Done Being Patient

The judicial response to AI hallucinations has evolved rapidly. In 2023, most courts were still figuring out how to handle the issue. By 2024, standing orders began appearing across federal and state courts requiring disclosure of AI use in legal filings. Now, in 2026, the patience has evaporated.

Multiple federal judges have implemented mandatory certification requirements. Attorneys must affirmatively certify that all citations in their filings have been verified against authoritative legal databases. Some courts have gone further, requiring attorneys to identify which portions of their filings were drafted with AI assistance.

Courts are not free to overlook threats to the integrity of their proceedings. Attorneys who submit fabricated citations, whether generated by AI or by any other means, will face sanctions. The novelty of the technology is no longer a mitigating factor.

The sanctions landscape is shifting too. In Mata v. Avianca, the $5,000 sanctions were widely viewed as lenient. Courts handling subsequent cases have signaled a willingness to impose harsher penalties, including referrals to state bar disciplinary committees. For a second offense by the same firm, the potential consequences are significantly more severe: monetary sanctions, adverse inference instructions, case dismissal, or formal disciplinary proceedings against the responsible attorneys.

Some courts are now treating AI-fabricated citations the same way they treat any other form of fraud on the court. And that framing changes everything. A "mistake" with an AI tool is one thing. Fraud on the court is a career-ending allegation.

What Law Firms Must Do Right Now

The time for vague commitments to "responsible AI use" is over. Firms need concrete, enforceable protocols. Here is what that looks like in practice:

1. Mandatory Verification for Every Citation

Every case cited in every filing must be verified against Westlaw, LexisNexis, or another authoritative legal database before the document is filed. No exceptions. No shortcuts. This sounds obvious, and it is. The fact that it needs to be stated explicitly is itself an indictment of current practice.
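
As a sketch of what "no exceptions" can look like when it is enforced by tooling rather than good intentions, consider a pre-filing gate along these lines. This is a minimal illustration under stated assumptions: the regex is a simplified stand-in for real citation grammar, and verified_in_database is a stub you would wire to Westlaw, LexisNexis, or a public source such as CourtListener. Until it is wired up, the gate fails closed.

```python
import re
import sys

# Simplified pattern for reporter citations such as "678 F. Supp. 3d 443".
# Real citation grammar is far richer; a production tool should use a
# dedicated citation parser instead of this illustrative regex.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:\s?Supp\.)?(?:\s?[234]d)?)\s+\d{1,4}\b"
)

def verified_in_database(citation: str) -> bool:
    """Stub: replace with a lookup against an authoritative legal database.
    Returning False by default means the gate fails closed until wired up."""
    return False

def prefiling_gate(brief_text: str) -> list[str]:
    """Return every extracted citation that could not be verified."""
    citations = sorted(set(CITATION_RE.findall(brief_text)))
    return [c for c in citations if not verified_in_database(c)]

if __name__ == "__main__":
    draft = open(sys.argv[1], encoding="utf-8").read()
    failures = prefiling_gate(draft)
    if failures:
        print("DO NOT FILE - unverified citations:")
        for citation in failures:
            print("  ", citation)
        sys.exit(1)
    print("All extracted citations verified.")
```

The important design choice is the default: a citation that cannot be verified blocks the filing rather than slipping through.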

2. Written AI Usage Policies with Teeth

Policies need to be specific. They need to identify which AI tools are approved for use, which tasks they may be used for, and what verification steps are required. Critically, they need enforcement mechanisms. A policy that exists only on paper is worse than no policy at all, because it creates a false sense of security.

3. Training That Goes Beyond the Basics

Attorneys need to understand what LLMs actually are and how they work. Not at a PhD level, but enough to know that these tools generate text probabilistically, that they do not "know" anything, and that their output requires independent verification. A one-hour CLE session is not sufficient. This needs to be ongoing, practical, and specific to the tools the firm actually uses.

4. Supervision and Accountability Structures

Someone needs to be responsible for verifying AI-assisted work product. That responsibility needs to be clearly assigned, not assumed. Partners who sign filings need to personally confirm that citations have been verified. The diffusion-of-responsibility problem needs to be addressed head-on.

5. Disclosure Protocols

Firms should adopt proactive disclosure of AI use in filings, even in jurisdictions that do not yet require it. This is both an ethical obligation and a risk management strategy. If a hallucination slips through despite verification efforts, voluntary disclosure of AI use demonstrates good faith and may mitigate sanctions.

The Malpractice Exposure Is Real

Let's talk about what keeps managing partners up at night: malpractice liability.

Filing a brief with fabricated case citations is, at minimum, a failure of competence under the professional rules. Rule 1.1 of the Model Rules of Professional Conduct requires attorneys to provide competent representation, which includes adequate preparation. Submitting AI-generated citations without verification is a clear violation.

Rule 3.3 imposes a duty of candor to the tribunal. Filing fabricated citations, even unknowingly, breaches this duty. And when it happens a second time after the firm was on notice of the risk, the "unknowingly" defense evaporates.

For Gordon Rees specifically, the second incident creates a particularly difficult malpractice exposure. After the first incident, the firm was on actual notice that its AI workflows could produce fabricated citations. Any failure to prevent a recurrence looks, at best, like negligence. At worst, it looks like reckless disregard.

The implications for legal malpractice insurance are significant. Insurers are already beginning to ask pointed questions about AI usage policies during the underwriting process. Firms that cannot demonstrate robust AI governance may face higher premiums, coverage exclusions, or difficulty obtaining coverage at all. A firm with two AI hallucination incidents on its record will face intense scrutiny from its insurer, and rightly so.

How AI Expert Witnesses Fit In

There are two contexts where AI expert witnesses become essential in these situations.

In sanctions and malpractice proceedings. When a court is evaluating the severity of an AI hallucination incident, technical expert testimony can establish what the AI tool was capable of, what its known failure modes were, and whether the attorney's reliance on it was reasonable. An expert can explain the difference between a tool that retrieves existing cases and a tool that generates plausible text. That distinction often determines whether the court views the attorney's conduct as negligent or merely unfortunate.

In designing prevention systems. Firms that are serious about preventing AI hallucinations need more than policies. They need technical assessments of their AI tools, validation protocols tailored to their specific workflows, and ongoing monitoring of hallucination rates. An AI expert can evaluate whether a firm's safeguards are genuinely adequate or merely performative.
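
On the monitoring point, one workable shape is periodic spot-checks: sample AI-assisted outputs, have a reviewer verify every citation in the sample against a legal database, and track the fabrication rate over time. The sketch below uses a record format invented for this illustration, one (citations_checked, fabricated_found) pair per sampled document, and the numbers are placeholders, not real audit data. The statistical wrinkle worth knowing: a clean sample does not prove a zero rate, so the sketch also reports a rough rule-of-three upper bound.

```python
# Hallucination-rate monitor sketch. Each tuple is one sampled document:
# (citations_checked, fabricated_found), as recorded by a human reviewer
# who verified each citation against an authoritative database.
# These numbers are placeholders, not real audit data.
samples = [(12, 0), (8, 1), (15, 0), (10, 0)]

checked = sum(n for n, _ in samples)
fabricated = sum(k for _, k in samples)

print(f"Documents sampled:   {len(samples)}")
print(f"Citations verified:  {checked}")
print(f"Fabrications found:  {fabricated}")
print(f"Observed rate:       {fabricated / checked:.1%}")

# A run of clean checks is not proof of safety. If zero fabrications are
# observed in n checks, the "rule of three" gives a quick ~95% confidence
# upper bound of about 3/n on the true rate.
if fabricated == 0:
    print(f"~95% upper bound:    {3 / checked:.1%}")
```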

I have conducted both types of engagements. The prevention work is more rewarding, because it stops the problem before it starts. But the forensic work after an incident is where the stakes are highest, and where the technical nuances matter most.

The Bigger Picture

Gordon Rees is the story of the week. But the underlying problem is industry-wide.

AI tools are being adopted across the legal profession at a pace that far outstrips the development of governance frameworks. Associates are using ChatGPT for research. Partners are using AI drafting tools for motions. Firms are deploying AI-powered platforms for document review, contract analysis, and case strategy. In many of these applications, the risk of hallucination or error is real, and the verification protocols are inadequate or nonexistent.

The firms that get ahead of this problem will have a competitive advantage. They will be able to use AI tools confidently, knowing that their safeguards are robust. They will avoid the reputational damage and sanctions that come with hallucination incidents. And they will be better positioned to serve clients who are increasingly asking about AI governance as part of their outside counsel selection process.

The firms that do not get ahead of it will learn the lesson the hard way. Some already have. Gordon Rees is learning it for the second time.

There is a simple test for whether your firm is taking this seriously. Ask yourself: if an AI tool generated a fabricated citation today, would your current workflow catch it before filing? If you are not confident the answer is yes, you have work to do. And given what courts are signaling, the time to do that work is now, not after the next incident.

The Criterion AI provides expert witness services, litigation support, and AI governance consulting for law firms and legal departments. If your firm needs help evaluating AI tools, designing verification protocols, or responding to an AI-related incident, contact us at info@thecriterionai.com or call (617) 798-9715.