Here is a scenario that is probably happening right now, somewhere in America. A corporate executive receives a grand jury subpoena. Panicked, they open Claude or ChatGPT and start typing: "I'm being investigated for securities fraud. Here are the facts. What's my best defense?" The AI responds with detailed strategic analysis. The executive prints it out, brings it to their first meeting with counsel, and feels prepared.
After United States v. Heppner, No. 1:25-cr-00483 (S.D.N.Y. Feb. 10, 2026), every word of that conversation is discoverable. The AI is not your lawyer. The AI company's privacy policy means your "confidential" chat was never confidential. And the prosecution just got a roadmap of your defense strategy, handed to them by your own client.
This is not a hypothetical. It is exactly what happened in Heppner.
The Facts: 31 Documents, Zero Privilege
The defendant, a former CEO accused of a $300 million securities fraud scheme, received a grand jury subpoena. Before he was indicted, and critically, before he engaged with his defense attorneys on strategy, he turned to Anthropic's Claude. He generated 31 documents outlining potential defense theories, analyzing the strength of the government's evidence, and mapping out litigation strategies.
He did this on his own. Not at the direction of counsel. Not as part of any attorney-supervised workflow. He was a scared executive trying to figure out how much trouble he was in, and he used the most powerful analytical tool available to him.
Then he shared the outputs with his lawyers, who incorporated the analysis into their defense preparation. When the government sought production of these documents, the defense asserted attorney-client privilege and work product protection.
Judge Jed S. Rakoff of the Southern District of New York rejected both claims. His written opinion, issued February 17, 2026, systematically dismantled every privilege argument the defense raised.
Three Ways the Privilege Died
Judge Rakoff's analysis identified three independent reasons the attorney-client privilege did not apply. Any one of them would have been sufficient. Together, they paint a comprehensive picture of why consumer AI and legal privilege are fundamentally incompatible.
First: Claude is not an attorney. Attorney-client privilege protects confidential communications between a client and their lawyer, made for the purpose of obtaining legal advice. Claude is not a lawyer. It is not licensed to practice law. It has no professional obligations. The communications between the defendant and Claude were not communications with counsel, period. The fact that the defendant was seeking something that resembled legal advice does not transform the recipient of those communications into an attorney.
Second: Anthropic's privacy policy destroys any expectation of confidentiality. Privilege requires that the communication be made in confidence. But Anthropic's terms of service and privacy policy make clear that user inputs may be used for model training, safety monitoring, and other purposes. The defendant's "confidential" strategy discussions were shared with a company whose own policies disclaim the kind of confidentiality that privilege demands. You cannot have a reasonable expectation of confidentiality when you are typing into a system whose operator explicitly tells you they may access your inputs.
Third: Claude disclaims legal advice. Claude's own outputs include disclaimers stating that it is not providing legal advice and that users should consult with a licensed attorney. The system itself tells you it is not acting as your lawyer. The defendant's reliance on Claude as a strategic advisor does not overcome Claude's own repeated disclaimers that it is not performing that function.
The privilege exists to facilitate candid communication between clients and their attorneys. An AI chatbot, regardless of how sophisticated its outputs, is not an attorney, and communications with it are not privileged.
Work Product? Not Even Close.
The work product doctrine fared no better. Under Hickman v. Taylor, 329 U.S. 495 (1947), work product protection applies to documents and tangible things prepared in anticipation of litigation by or for a party, or by or for that party's representative. The critical phrase: "by or for" a party's representative, meaning counsel.
The defendant generated these documents himself, using a consumer AI tool, without any attorney involvement. His lawyers did not direct him to use Claude. They did not design the prompts. They did not supervise the process. The documents were prepared by a layperson using a machine, not by or at the direction of attorneys.
Judge Rakoff also rejected the Kovel doctrine argument. Under United States v. Kovel, 296 F.2d 918 (2d Cir. 1961), communications with third-party intermediaries can be privileged when the intermediary is "necessary" for the client to communicate effectively with counsel. Think of an interpreter translating for a non-English-speaking client, or an accountant helping explain complex financials to a tax attorney.
The defense argued Claude functioned as this kind of necessary intermediary. Rakoff was unpersuaded. The defendant did not need an AI to communicate with his lawyers. He chose to use one. The Kovel doctrine does not extend to every tool a client finds convenient. It requires necessity, and there was nothing about these communications that required an AI intermediary.
The Split: Warner v. Gilbarco Goes the Other Way
Just as Heppner was making headlines, K&L Gates published an analysis of Warner v. Gilbarco Inc. (E.D. Mich.), a civil case that reached the opposite conclusion on AI-generated legal documents. In Warner, a pro se litigant used AI tools to help prepare filings, and the court found certain protections applied.
The distinction matters. In Warner, the litigant was representing themselves. There was no attorney-client relationship to analyze because the litigant was the attorney (at least in the pro se sense). The court's analysis focused on whether the AI-assisted documents qualified as the litigant's own work product, a fundamentally different question than whether documents generated independently of counsel and then shared with counsel are privileged.
This split is early, but it signals something important: courts are going to reach different conclusions depending on the specific facts. Was the AI used at counsel's direction or independently? Was the user a represented party or pro se? Was the AI tool a consumer product with broad data-sharing policies, or an enterprise tool with strict confidentiality guarantees?
The fact-specificity is the point. As A&O Shearman noted in their client alert, Heppner relies on traditional privilege principles applied to new facts. It does not create new law. It applies old law to the reality that people are now typing their most sensitive legal strategies into consumer chatbots. And old law says: that is not privileged.
Why BigLaw Is Panicking (Quietly)
Dorsey & Whitney published an alert calling Heppner a "game-changer," and they are not wrong. But the real panic is not about what happened in Heppner itself. It is about what happens next.
Consider the current state of AI use in legal practice. Partners use ChatGPT to brainstorm case theories. Associates use Claude to draft research memos. In-house counsel use AI tools to analyze contracts and assess litigation risk. Corporate executives, like the defendant in Heppner, use AI to prepare for conversations with their lawyers.
After Heppner, every one of these use cases carries privilege risk. The question is no longer "Is AI useful for legal work?" It is "Under what conditions does AI use create a discoverable record of privileged thinking?"
The answer, based on Rakoff's analysis, is alarmingly broad. If you use a consumer AI tool (one with standard terms of service that permit data access by the provider), you likely cannot claim confidentiality. If the AI is not operating under attorney supervision as part of a privileged workflow, the outputs are not work product. And if a client generates AI documents independently and shares them with counsel, those documents are fair game.
Practical Takeaways for Every Firm
Segregate consumer AI from privileged workflows. If attorneys use AI for any purpose connected to litigation strategy, it must be through an enterprise tool with contractual confidentiality protections, not through a consumer chatbot. The privacy policy is the privilege killer. Fix the privacy policy problem and you fix the second prong of Rakoff's analysis.
Train your clients. This is the most urgent takeaway. The defendant in Heppner was not a lawyer. He was a client who did what millions of people do every day: asked an AI for help with a problem. Firms need to add AI usage guidance to their client engagement letters. Tell clients, explicitly and in writing, not to discuss case strategy with consumer AI tools.
Document the attorney-directed workflow. If your firm uses AI as part of case preparation, make sure the usage is directed by attorneys, integrated into the attorney work product process, and documented as such. The distinction between "attorney used AI as a research tool" and "client independently generated AI documents" may be the difference between privilege and production.
Audit your prompts. Every prompt is a potentially discoverable document. If your prompts contain privileged information (client names, case strategies, confidential facts), you have already created a privilege problem. Design prompt workflows that are functionally useful without embedding privileged content.
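To make the "functionally useful without embedding privileged content" idea concrete, here is a minimal sketch of a prompt-sanitization step a firm might run before any text leaves its systems. Everything in it is hypothetical: the client name, the matter-number format, and the replacement tokens are illustrative placeholders, not a real firm's data or a real product's API. A production version would pull its term list from the firm's matter-management system and log what was redacted.

```python
import re

# Hypothetical term list: in practice, loaded from the firm's
# matter-management system, not hard-coded.
PRIVILEGED_TERMS = {
    "Acme Holdings": "[CLIENT]",
    "Jane Roe": "[INDIVIDUAL]",
}

# Hypothetical matter-number format (e.g. "2026-00483").
MATTER_NUMBER = re.compile(r"\b\d{4}-\d{5}\b")

def sanitize_prompt(prompt: str) -> str:
    """Replace privileged identifiers with neutral tokens before the
    prompt is sent to any external AI tool."""
    for term, token in PRIVILEGED_TERMS.items():
        prompt = prompt.replace(term, token)
    return MATTER_NUMBER.sub("[MATTER]", prompt)

raw = "Summarize securities-fraud defenses relevant to Acme Holdings, matter 2026-00483."
print(sanitize_prompt(raw))
# → Summarize securities-fraud defenses relevant to [CLIENT], matter [MATTER].
```

The design point is that the legal question ("what defenses exist to a securities-fraud charge?") survives sanitization intact, while the facts that make the prompt privileged do not. A redaction step like this does not cure the other privilege problems Rakoff identified, but it limits what a discoverable chat log can reveal.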
Update litigation hold procedures. Your litigation hold notices need to explicitly cover AI-generated documents: chat logs, exported conversations, saved outputs, and any materials derived from AI interactions. Custodians need to understand that their Claude conversation history is not private.
The Expert Witness Angle
Heppner creates a category of expert witness demand that did not exist before this ruling. Here is why.
When AI-generated documents become discoverable, both sides need experts who can explain what those documents actually represent. What does it mean that an AI "generated" a defense strategy? How reliable is that analysis? Did the AI's output reflect the user's inputs (and therefore the user's thinking), or did the AI introduce its own analytical frameworks? How do different AI tools handle data retention, and what does the provider's privacy policy actually permit?
These questions require someone who understands both the technology and the legal context. A litigator who has never looked under the hood of an LLM cannot effectively argue about what Claude's outputs mean. A computer scientist who does not understand privilege doctrine cannot explain why the workflow matters.
This is the intersection where AI expert witnesses become essential. In the next wave of privilege disputes (and there will be many), courts will need testimony about how AI tools work, how data flows through them, what confidentiality protections are technically feasible, and whether a particular AI workflow was designed to preserve privilege or inadvertently waived it.
The firms that get ahead of this will be the ones that can demonstrate, through expert testimony if necessary, that their AI workflows were designed with privilege preservation in mind. The firms that cannot will find themselves relitigating Heppner with their own clients' documents on the line.
The Bigger Picture
Before Heppner, AI in legal practice was mostly a story about efficiency. Faster document review. Better research. Cheaper first drafts. The risk conversation focused on hallucinations and accuracy.
After Heppner, AI in legal practice is also a story about exposure. Every prompt is a potential exhibit. Every AI-generated analysis is a potential discovery target. The tool that was supposed to give you an edge might be giving your opponent a window into your strategy.
This does not mean firms should stop using AI. That ship has sailed. It means firms need to use AI the way they use every other powerful tool in litigation: deliberately, with clear protocols, under attorney supervision, and with full awareness of the evidentiary consequences.
Judge Rakoff did not break new ground in Heppner. He applied century-old privilege principles to 2026 technology. The principles held. The technology did not get a special exemption. And that, more than any specific holding, is the lesson every firm needs to internalize.
Your AI knows your legal strategy. The question is whether you have built your workflow so that only your AI knows it, or whether you have accidentally shared it with everyone.
The Criterion AI provides expert witness services and litigation support for matters involving artificial intelligence, machine learning, and algorithmic decision-making. For a confidential consultation on AI privilege workflows, discovery disputes involving AI-generated documents, or expert testimony on AI systems in litigation, contact us at info@thecriterionai.com or call (617) 798-9715.