Stop Your Clients from Chatting with AI about Their Case
In February 2026, a federal court in New York delivered a decision that every litigator—civil or criminal—should treat as an immediate client-counseling trigger. In United States v. Heppner, the court ruled that a defendant’s materials “submitted … to AI” were not protected by attorney–client privilege and not protected as work product.

That may sound like a niche evidentiary ruling. It isn’t. It’s a warning shot about a fast-growing behavior: litigants using LLMs (Large Language Models) such as ChatGPT, Claude, Gemini and others as a substitute strategist or rehearsal room for a dispute. When clients do that, it can create a clean, timestamped record of their thinking—often including admissions, intent, “best arguments,” and narrative edits—that falls outside the privilege umbrella.

From a practical litigation perspective, those AI chats can operate like a jailhouse snitch: the client believes they’re speaking privately, but the content is not privileged, and it can end up telling the other side exactly what it wants to know.

 

The Heppner moment that every litigator needs to heed

The week before the written ruling, the court’s comments at the hearing distilled the problem with unusual clarity. Judge Rakoff went straight to first principles, rejecting any claim of attorney-client or work product privilege:

THE COURT: Well, first of all, I don't see any basis for a claim of attorney-client privilege. Did you want to say anything further on that?

COUNSEL: I think what those reports reflect – and I'm happy to submit the reports for your review in camera.

THE COURT: No. I'm not saying, remotely, any basis for any claim of attorney-client privilege.

*  *  *

THE COURT: The core purpose of Work Product Doctrine is to protect the mental strategies of counsel in anticipation of litigation. How did that relate to this? This was not something that reflected your strategy, as I understand what you're saying.

COUNSEL: No, I think it did affect our strategy, your Honor.

THE COURT: No. Did it reflect your strategy.

COUNSEL: No. As we acknowledge, these were prepared by the defendant on his own volition.

*  *  *

THE COURT: All right. The government's motion is granted.

 

That dialogue is the entire cautionary tale in miniature. Courts are not struggling with some exotic “AI doctrine.” They are applying familiar doctrines to a new communications channel, and asking familiar questions: Was the communication privileged? Was it confidential? Was it attorney work product, i.e., counsel’s strategy?

When a client independently “talks” to an LLM about their case, the safest assumption is that the answer to those protection questions is: no.

 

Privilege isn’t a feeling: your client is talking to a third party

Most clients experience LLMs as intimate and private. They type in sensitive facts. They test out versions of events. They ask questions that they would never ask a human being. They get language back that sounds confident and professional. The interaction feels like a confidential consultation.

But privilege is not about how it feels. It’s about whether it meets the legal requirements of confidentiality and purpose, and whether it remains protected from disclosure to third parties. The moment a litigant discusses matter-related facts, motivations, or strategy with a non-lawyer third party, the privilege analysis is essentially over.

The hearing transcript underscores the point. Early in the discussion, the court said flatly that it saw no basis for attorney–client privilege.

The record in Heppner also documents a reality that will arise repeatedly in these disputes: many AI tools' terms of service tell users they have no expectation of privacy in their inputs, giving courts yet another reason to find that confidentiality is missing.

Even if a client doesn’t read the terms, those terms still matter when the client later tries to argue the chat was “confidential.” And even if the terms were perfect, the basic practical problem remains: your client has created a document-like record of case-related statements outside the attorney-client channel.

 

Why client-side LLM use is the “jailhouse snitch” of modern litigation

Calling it a “jailhouse snitch” isn’t just a dramatic metaphor. It describes how these records behave in real disputes.

A jailhouse snitch isn’t dangerous because it is always accurate. It’s dangerous because it is usable. It creates a narrative. It can be quoted. It can be introduced through a witness. It can be used to impeach. It can be used to show intent or knowledge. And it can be used even when it is messy—sometimes especially when it is messy.

Client LLM chats share those characteristics, but can be even more harmful:

  • They’re written and timestamped.

  • They often contain the client’s most candid version of events.

  • They can capture evolving stories (“Here’s what happened… actually, maybe it was more like…”).

  • They preserve motive/intent language (“How do I avoid getting caught?” “Is this illegal?” “How bad is this?”).

  • They can embed or summarize attorney advice (“My lawyer said X; how should I respond?”).

And unlike a hallway conversation, these chats are easy to preserve and reproduce. In some cases they are stored by the platform; in others, they’re on the client’s devices; often they’re both. That means they can surface through any of the usual mechanisms: device imaging, account exports, backups, cooperating witnesses, or third-party process.

 

“But Heppner was criminal—civil discovery is different!”

Criminal procedure is different from civil discovery. But the risk logic is the same:

  1. In civil cases, parties can seek relevant, non-privileged ESI (electronically stored information) through document requests, interrogatories, depositions, and subpoenas.

  2. If the client’s LLM chats are not privileged, they are simply another source of potentially discoverable ESI, much like texts, DMs, or social media.

  3. If the chats contain admissions, inconsistencies, or intent evidence, they become unusually valuable to the other side.

A useful analogy is social media discovery. Over the past 15+ years, lawyers have sought to teach clients—sometimes successfully—that posting about the dispute can destroy the case. We now need a similar effort aimed at LLMs: don’t create new, non-privileged records about the dispute.

AI chats may be worse than social media, because social media is performative while AI chats are often confessional.

 

Work product protection won’t reliably bar production

The hearing exchange quoted above is also a roadmap for why “work product” will not reliably protect client-generated AI material.

Work product doctrine is designed to protect counsel’s litigation preparation, particularly mental impressions and strategy. The court emphasized that “core purpose” in plain language.

When the document reflects the client’s own thinking, created at the client’s own initiative, and routed through an LLM platform, courts may view it as exactly what the government argued in Heppner: “Mr. Heppner’s own actions” rather than attorney strategy.

That doesn’t mean a work-product argument is impossible in every scenario. But the default posture for LLM chats by a client should be: assume they are unprotected and you will lose the fight if you litigate it. And even where you might win an argument, the mere existence of the chats creates cost, motion practice, risk of waiver findings, and the possibility of partial disclosure.

 

The coming LLM provider headache: subpoenas and third-party discovery pressure

There’s another aspect here that is easy to overlook until it arrives in the form of a subpoena: LLM providers will become third-party discovery targets.

If the opposing party believes a litigant used an AI platform to discuss the dispute, they may pursue the content from the litigant (devices, exports, screenshots) and/or from the provider (account records, preserved logs, metadata, chat content where retained).

Provider discovery will bring the same headaches we’ve seen with telecoms and social platforms—scope fights, privacy arguments, burden objections, protective orders—but with the added volatility that users often misunderstand what is retained and for how long.

This pressure is not hypothetical. In high-profile copyright litigation involving major publishers and OpenAI, discovery disputes have publicly highlighted the tension between producing chat-related information and protecting privacy and minimizing burden. The details of those disputes will vary case to case, but the practical takeaway for litigators is that, once AI chats exist, they can generate collateral discovery battles that distract from the merits and increase risk.

 

The advice that should be given in every engagement, early and in writing

Heppner should prompt a straightforward policy update for clients: treat LLM use like texting or posting about the case with third parties. In other words, don’t just mention it casually. Put it in writing, right away, and repeat it when the case becomes stressful.

Clients should be told, clearly:

  • Do not use LLM tools to discuss facts, strategy, exposure, settlement positions, witness issues, or damages related to the matter.

  • Do not paste attorney advice, draft filings, demand letters, expert analyses, or privileged communications into an LLM.

  • Assume anything typed into, or output by, an LLM about the matter could be obtained and used in the case.

The point isn’t to demonize the tools. The point is to prevent clients from unintentionally building the other side’s case.

 

The inquiry that matters just as much as the advice: find out what already exists

A warning is only half the job. The other half is identifying whether the client has already created the problem.

Many clients will have used an LLM before they ever call a lawyer—because it’s fast, free (or low cost), and feels nonjudgmental. Others will use it during representation because they are anxious and want instant answers at 2:00 a.m. Some will do it because they don’t want to “bother” (or pay) their lawyer. Those are understandable human impulses. They are also litigation landmines.

So, at intake and early in the matter, add a direct question:

“Have you used any AI chatbot to discuss this dispute?”

If the answer is yes, the next step is not moralizing. It is risk management. You need to request the full dialogue (prompts and outputs), determine whether it must be preserved, evaluate whether it contains harmful admissions or inconsistencies, and consider how it affects discovery strategy.

Importantly, you also want to stop ongoing creation. A client who has already used an LLM is more likely to keep using it unless you explain, in plain terms, why it’s dangerous.

 

How to explain it to clients in one sentence

“If you ask an AI chatbot about your case, assume it will be read back to you in a deposition or trial.”

That line avoids debating retention policies, privacy settings, or whether the model “trains” on the data. Those details matter in some contexts, but they are not the core litigation risk. The core risk is that the client has created a non-privileged record containing case-related statements.

A practical checklist

  1. Update engagement letters / initial client advisories to include an “AI tools” warning.

  2. Add an intake question about prior AI chatbot use related to the dispute.

  3. Request and preserve existing AI dialogues that relate to the matter (prompts + outputs).

  4. Update litigation hold language to explicitly reference AI chat histories, exports, synced devices, and screenshots.

  5. Repeat the warning mid-case, when stress rises and clients are most likely to freelance.

 

Heppner isn’t about AI, it’s about evidence

The most important thing to understand about Heppner is that it is not “an AI case.” It is an evidence case. It applies familiar privilege and work product principles to a new place where people talk.

And because clients increasingly talk to LLMs the way they talk to counsel—about risk, strategy, facts, and how to frame events—those conversations will increasingly show up in litigation. When they do, courts are likely to ask the same questions the Heppner court asked: does this reflect counsel’s strategy, or is it the litigant’s own creation, shared with a third party?

If you want to avoid the modern “jailhouse snitch” scenario—where your client’s chatbot becomes the other side’s most helpful witness—the solution is simple and immediate: make sure your client does not chat with AI about their case, and find out up front if they already have.

This post is for informational purposes only and does not constitute legal advice.

 

About the Author

James Chapman is a co-founder of esumry and a defense litigator. He writes about the intersection of AI, litigation strategy, and legal operations.

 

With esumry, privilege is protected through ZDR (zero data retention), and case analysis is fast, strategic, and secure. Create timelines, tag testimony, assess credibility, and get ahead of how the other side will use the record—before they do.

 
 
 

Want to learn more about the efficiency and accuracy of esumry?

We invite you to see firsthand how AI and automation will revolutionize your litigation defense.

Schedule a demo of esumry today and start transforming your litigation defense.

 

 