The Ruling, in 60 Seconds

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled from the bench that 31 documents Bradley Heppner created using Anthropic's consumer AI chatbot Claude were protected by neither attorney-client privilege nor the work product doctrine. United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026). Heppner, a CEO facing securities and wire fraud charges, had used Claude on his own initiative (not at counsel's direction) to organize his thoughts, analyze his legal exposure, and generate reports outlining defense strategy. He then shared those AI-generated documents with his lawyers at Quinn Emanuel.

Judge Rakoff was blunt: "I'm not seeing remotely any basis for any claim of attorney-client privilege." The written opinion, issued February 17, called the question one of "nationwide" first impression and laid out three reasons the privilege failed. Claude is not a lawyer. Claude's privacy policy destroys any expectation of confidentiality. And Heppner did not use Claude "for the purpose of obtaining legal advice." On work product, the court found the documents were not prepared by or at the direction of counsel.

We covered the ruling in depth in our original analysis. What happened next was just as important: every major law firm in the country rushed to publish its take. Within two weeks, the client alert ecosystem was saturated. Here is what they said.

Gibson Dunn: Traditional Doctrine, Not an AI-Specific Rule

Gibson Dunn's February 20 client alert is the most comprehensive analysis in the bunch. Their AI, Privacy, Cybersecurity, and White Collar teams co-authored the piece, and the framing is instructive. Gibson Dunn emphasizes that Judge Rakoff "did not announce a rule uniquely targeting AI technologies." Instead, he "applied traditional attorney-client privilege and work product legal principles to the conduct at issue."

This is not a small point. Gibson Dunn is telling its clients: do not panic about an anti-AI rule. Panic (they would never use that word, of course) about architecture. The problem in Heppner was not that the defendant used AI. The problem was that he used a consumer AI tool with a privacy policy that expressly permitted data collection, model training, and disclosure to third parties. That privacy policy destroyed any reasonable expectation of confidentiality.

The practical implication Gibson Dunn highlights is significant. Users of "publicly accessible or open AI platforms may assume that privilege attaches to their inputs and the resulting AI outputs." That assumption is wrong. If a platform's terms of service permit disclosure to third parties, the confidentiality element of privilege evaporates on contact.

Gibson Dunn's implicit recommendation: enterprise AI deployments with contractual confidentiality protections stand on fundamentally different footing than consumer tools. The ruling is about the architecture of the relationship, not the technology itself.

Jones Day: Keep It Short, Keep It Cautious

Jones Day's analysis is characteristically brief. They lay out the facts, summarize the holding, and close with a single sentence of guidance: "litigants should exercise caution when using AI in legal contexts, and ensure consultation with counsel when considering whether to do so."

That brevity is itself a signal. Jones Day is not telling clients that the sky is falling. They are telling clients to talk to their lawyers before using AI in anything touching litigation. The alert emphasizes the three-part failure: no attorney-client communication (Claude is not a lawyer), no confidentiality (the privacy policy killed it), and no legal advice purpose (Heppner acted on his own, not at counsel's direction).

One subtle point Jones Day highlights: the court distinguished between Heppner's intent to discuss AI outputs with counsel and the question of whether he used the AI tool "for the purpose of obtaining legal advice." Those are different things. You can plan to show your lawyer a document without that document being created for the purpose of getting legal advice. The distinction matters.

Morgan Lewis: Consumer vs. Enterprise Is the Whole Ballgame

Morgan Lewis's alert is the most strategically valuable of the group because it does something the others largely don't: it draws a bright line between consumer AI tools and enterprise AI deployments. Their key takeaway is blunt: "Early decisions suggest courts will find that using consumer AI tools may waive the attorney-client privilege, while enterprise AI tools used at the direction of counsel offer more protection."

This framing is essential for in-house legal departments. Morgan Lewis is saying: the type of AI tool you use, and the contractual terms governing that tool, may determine whether your privilege survives. A consumer version of Claude with a standard privacy policy is one thing. An enterprise deployment with negotiated confidentiality provisions, data isolation, and no model training on inputs is something entirely different.

Morgan Lewis also highlights an important gap in the ruling. Judge Rakoff "did not address the situation where an attorney directs a client to use an AI tool as part of the legal representation." That is a critical open question. If counsel had instructed Heppner to use Claude, the work product analysis might have come out differently. The court itself underscored the point by questioning defense counsel about their lack of involvement in creating the documents.

Perhaps most importantly, Morgan Lewis flags the tension between Heppner and existing work product case law. The court ruled narrowly on the absence of "attorney" work product but did not engage with the broader principle that materials created by a party in anticipation of litigation can qualify for protection under Rule 26(b)(3). That tension will be tested.

A&O Shearman: The Investigations Lens

A&O Shearman's analysis comes from their Investigations practice group, and that lens shapes their reading. They focus on the discovery risks created "when using AI outside the specific guidance of an attorney." The emphasis on attorney direction is deliberate. A&O Shearman is drawing a practical bright line for clients involved in government investigations: if your employees or executives are using AI tools to analyze legal exposure, and they are doing so without attorney supervision, those outputs may be fair game.

Their recitation of the government's arguments is worth noting. The government argued that Heppner's communications with Claude "could not have been created for the purpose of obtaining legal advice, even if later transmitted to counsel for that purpose, in part because Anthropic's policies explicitly disclaim its use for the purpose of gaining legal advice." That is a clever argument. The AI tool itself says it does not provide legal advice. You cannot claim you used it for that purpose when the tool's own terms say otherwise.

A&O Shearman closes by noting that the application of privilege and work product to AI-generated documents "will surely raise" further questions in future cases. Translation: this is round one, and we are watching closely.

Debevoise: First on the Scene

Debevoise published their analysis on February 11, the day after Judge Rakoff's bench ruling, before the written opinion was even issued. Their Data Blog team moved fast, and the early timing shaped their framing. Debevoise focuses on the practical implications for "protecting client communications that involve the use of AI tools."

Their factual summary adds a detail that other firms gloss over: Heppner's counsel at Quinn Emanuel conceded that he "prepared the AI documents on his own initiative, not at his counsel's direction." That concession was devastating. It foreclosed the strongest work product argument (that the documents were prepared at the behest of counsel) and left defense counsel arguing the weaker position that a client's own litigation preparation should be protected regardless of attorney involvement.

Debevoise also emphasizes the no-retroactive-cloaking principle. "The subsequent act of transmitting these AI-generated documents to counsel does not create a shield of attorney-client privilege." You cannot launder unprivileged documents by sharing them with your lawyer after the fact.

Alston & Bird: The Corporate Governance Angle

Alston & Bird's analysis comes from their Privacy, Cyber & Data Strategy team, and they focus on a dimension that should terrify general counsel everywhere: Heppner entered privileged information he had received from his attorneys into a consumer AI tool. Even though that information was originally privileged, the act of disclosing it to Claude (a third party with no confidentiality obligations) likely waived the privilege over the underlying communications too.

Read that again. It is not just that the AI outputs lost privilege. By feeding privileged attorney-client communications into a consumer AI tool, the client may have waived privilege over the original communications themselves. Alston & Bird frames this as directly relevant to "companies working to manage internal uses of AI tools, and also to operations of corporate Legal departments."

This is the corporate nightmare scenario. An employee receives privileged legal advice from in-house counsel. The employee pastes that advice into ChatGPT or Claude to help draft a response or analyze a situation. Under Heppner's logic, the employee just waived privilege over both the AI output and potentially the original legal advice. Every AI acceptable use policy in America needs to address this.

McDermott Will & Emery: The Practical Playbook

McDermott's analysis is the most practically oriented. They frame Heppner as "best understood as a technology-neutral decision applying longstanding privilege principles to a new context," and then get to work on what to actually do about it.

McDermott organizes the ruling around "three familiar pillars of privilege doctrine": no attorney-client communication, lack of confidentiality and waiver, and no retroactive cloaking. Their contribution is making each pillar actionable. On confidentiality, they note that Judge Rakoff "emphasized that the defendant had disclosed it to a third party, in effect, AI, under terms that did not guarantee confidentiality." The analogy McDermott draws is clarifying: "If a client forwards a privileged email to a friend, posts it on a public platform, or shares it with an unprotected third-party service, privilege is typically waived. The fact that the third party here was an AI platform does not alter the analysis."

DLA Piper and Others: The Chorus Grows

DLA Piper, Chapman and Cutler, Harris Beach Murtha, Leech Tishman, Maynard Nexsen, and Jones Walker have all published their own analyses. The details vary, but the themes converge.

DLA Piper highlights a nuance in Judge Rakoff's reasoning: the "purpose" element of privilege was a closer call than the first two elements, but the court still found it wanting. Heppner may have been planning to share his findings with counsel, but he "did not do so at the suggestion or direction of counsel." For privilege purposes, intent to later discuss is not the same as creating a document for the purpose of obtaining legal advice.

Chapman and Cutler emphasizes the implications for "any non-lawyer who uses artificial intelligence tools to research or analyze legal matters." That is a wide net. It captures not just criminal defendants but corporate employees, compliance officers, and anyone else who might use AI to think through legal questions before talking to a lawyer.

Where the Firms Agree

The consensus is striking. Despite differences in emphasis, every major firm agrees on these points:

This is not a new rule. Judge Rakoff applied traditional privilege doctrine. AI did not change the law. It just created a new way to trip over existing requirements.

Consumer AI tools are the problem. The privacy policies of consumer AI platforms (allowing data collection, model training, and third-party disclosure) are fundamentally incompatible with maintaining the confidentiality required for privilege.

You cannot retroactively create privilege. Sharing an unprivileged document with your lawyer does not make it privileged. This principle is as old as the privilege itself, but Heppner applies it to a new context.

Attorney direction matters. Had counsel directed Heppner to use the AI tool, the work product analysis might have been different. Every firm notes this gap in the ruling.

Where They Disagree (or at Least Diverge)

The disagreements are more subtle, but they are real.

How broad is the ruling? Gibson Dunn and McDermott read it narrowly: this is about consumer AI with bad privacy policies, not about AI generally. Morgan Lewis goes further, explicitly distinguishing enterprise from consumer deployments and suggesting enterprise tools "offer more protection." Others, like Alston & Bird, read the ruling more broadly as a warning about any AI use that involves privileged information.

What about party work product? Morgan Lewis flags that Judge Rakoff ruled on the absence of "attorney" work product but did not fully engage with the question of whether a party's own materials, created in anticipation of litigation, might qualify for protection under Rule 26(b)(3). Other firms do not press this point. This is the most likely avenue for future litigation to test the ruling.

Can enterprise AI save you? Most firms imply that enterprise deployments with contractual confidentiality protections would produce a different result. But none of them say it definitively. The court in Heppner did not address enterprise AI, and no firm wants to promise a result the court has not reached.

What Every Litigation Department Should Do Now

Synthesizing the guidance across all of these alerts, here is the practical consensus.

Ban consumer AI for anything touching litigation or legal advice. If your employees are using ChatGPT, Claude, Gemini, or any other consumer AI tool to analyze legal exposure, draft legal memoranda, or prepare for conversations with counsel, stop. Today. The privacy policies of these tools will destroy your privilege claims.

Deploy enterprise AI with negotiated confidentiality terms. If you want to use AI in legal workflows, use enterprise-grade deployments with contractual provisions that guarantee confidentiality, prohibit model training on your inputs, and restrict data sharing. This is where Morgan Lewis's consumer-versus-enterprise distinction becomes operationally critical.

Ensure attorney direction and supervision. Every firm agrees that attorney direction matters. If AI tools are used in anticipation of litigation, they should be used at the direction of counsel and under counsel's supervision. This preserves the strongest work product arguments and keeps the AI tool within the attorney-client relationship as a functional equivalent of a non-lawyer assistant.

Update your AI acceptable use policies. Alston & Bird's analysis makes this urgent. If employees can paste privileged legal advice into consumer AI tools, they can waive privilege over the original communications. Your AI policy needs to explicitly prohibit inputting privileged or confidential legal communications into any AI tool that lacks enterprise-grade confidentiality protections.

Revise litigation hold procedures. AI-generated documents must now be explicitly addressed in litigation hold notices. Custodians need to understand that AI outputs, including summaries, analyses, categorizations, and strategy documents, must be preserved alongside the underlying materials. If those outputs are not privileged, they are discoverable, and failing to preserve them creates spoliation risk.

Segregate AI outputs from privileged communications. Do not commingle AI-generated analyses with attorney-client communications in the same files, folders, or platforms. When attorneys develop strategy based on AI analysis, the attorney's own memoranda reflecting their legal judgment should be documented separately from the AI's outputs.

Review AI vendor agreements. If your AI tools process data on remote servers, review the vendor's privacy policy, terms of service, and data processing agreements. Look specifically for provisions regarding data retention, model training, and third-party disclosure. If those provisions are inconsistent with maintaining confidentiality, your privilege is at risk before you type a single prompt.

How This Ruling Will Be Tested

Heppner is a district court opinion from the Southern District of New York. It is not binding outside that district, and several of its holdings will face challenges in other courts and circuits.

The enterprise AI question. The first case involving an enterprise AI deployment with negotiated confidentiality provisions will test the most important distinction in the ruling. If a company uses a tool that contractually guarantees confidentiality, prohibits model training, and restricts data access, the confidentiality element of privilege should survive. But until a court says so, it remains an open question.

Attorney-directed AI use. When an attorney specifically directs a client (or a paralegal, or an associate) to use an AI tool to prepare materials in anticipation of litigation, the work product analysis should come out differently. The "at the behest of counsel" requirement would be satisfied. This is the clearest path to distinguishing Heppner.

Party work product under Rule 26(b)(3). Morgan Lewis correctly identifies the tension between Heppner and the plain language of Rule 26(b)(3), which protects materials prepared "by or for" a party in anticipation of litigation. Heppner prepared those documents in anticipation of his criminal trial. A future court might find that the work product doctrine, properly applied, does protect a party's own AI-generated litigation preparation materials, even without attorney direction.

Other circuits. Circuits that take a broader view of work product protection, or that have not yet addressed AI-related privilege questions, may reach different conclusions. The Second Circuit has not yet weighed in on Heppner itself. And courts in the Ninth Circuit, the D.C. Circuit, and state courts applying different privilege standards may develop their own frameworks.

The appeal. Heppner's defense team from Quinn Emanuel may seek interlocutory review of the privilege ruling. Given that Judge Rakoff himself characterized the question as one of "nationwide" first impression, a strong argument exists that the issue warrants appellate review before trial. Because this is a criminal case, certification under 28 U.S.C. Section 1292(b) (which applies only to civil actions) is unavailable; watch instead for a petition for a writ of mandamus to the Second Circuit.

The Bottom Line

The legal profession has reached a rare moment of consensus. Across the ideological and strategic spectrum of BigLaw, the message is the same: Heppner is not an anti-AI ruling. It is an anti-carelessness ruling. Organizations that treat AI like any other third-party tool, with attention to confidentiality obligations, attorney supervision, and proper documentation practices, will be fine. Those that let employees paste privileged information into consumer chatbots without guardrails will learn this lesson the expensive way.

Judge Rakoff did not break new legal ground. He applied old rules to a new tool. But the practical consequences are profound, because so many lawyers and clients had been operating under the unstated assumption that AI conversations were somehow different. They are not. Privilege requires confidentiality. Consumer AI tools do not provide it. Everything else flows from there.

The real question is not whether Heppner was correctly decided. It is whether the legal profession will adapt its AI workflows before the next ruling forces the issue. Based on what BigLaw is publishing, the answer is: they are trying. Whether their clients are listening is another matter entirely.

The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on an active or anticipated matter, contact us at info@thecriterionai.com or call (617) 798-9715.