Safeguarding Privilege in the AI Era: What Lawyers and Clients Must Get Right
It’s never been easier to give your privilege away
Imagine this: A corporate executive preparing a presentation for the board about an ongoing lawsuit decides to use ChatGPT to “make it sound more professional.” They paste in a paragraph summarizing the facts, a few strategy points, and an update from outside counsel. In another office, the company’s trial lawyer uploads a deposition transcript from the case into an online AI tool that promises quick summarization.
Both interactions feel efficient, modern — even harmless. But in reality, each has potentially disclosed privileged and confidential information to a third party. Neither conversation occurred within the protected zone of attorney–client communication that privilege depends on. Both may be discoverable.
That’s the quiet peril of the new AI environment. We are all using tools that look like extensions of our workspace but function like public microphones.
Privilege was built for envelopes, not algorithms
Attorney–client privilege rests on confidentiality. If information that forms the basis of legal advice is voluntarily shared with outsiders, privilege can be waived. Traditionally, that meant avoiding careless emails, hallway conversations, or forwarding documents too broadly.
Today, the same waiver can occur with a few keystrokes — simply by feeding information into an AI system that stores, analyzes, or transmits it beyond your control.
Public tools like ChatGPT, Gemini, or Claude are, at bottom, third-party services with their own data policies. Users may think their conversations are ephemeral, but most consumer AI tools retain logs, usage metadata, and often the entire text of prompts and responses. Those records can be subpoenaed, accessed by employees, or repurposed for model training. In litigation, opposing counsel could argue that uploading a privileged document to such a system constitutes disclosure outside the privileged relationship.
The danger isn’t just theoretical. Earlier this year, OpenAI was ordered in the New York Times litigation to preserve all prompts and responses entered into ChatGPT. That order did not apply to enterprise “zero-data-retention” (ZDR) systems, but it did apply to every regular consumer account, even paid subscriptions. The parties have since stipulated to a much narrower retention obligation, but for any lawyer or client using ChatGPT.com the reality remains stark: every prompt and every answer preserved under the order may become subject to review in discovery in the case.
The illusion of privacy and the psychology of over-sharing
What makes these tools insidious is how personal and conversational they feel. They emulate a mentor or a colleague — someone safe to talk to. The longer the chat continues, the easier it becomes to cross the line between general inquiry and privileged fact pattern. A user might start with “how do I draft a motion to compel?” and end up describing their specific discovery dispute, including opposing counsel’s name and a privileged litigation strategy.
That’s the creeping disclosure problem: there is no sharp line between casual and confidential once you’re in conversation with a machine that remembers everything you say.
The hidden leaks, or “helpful” software that isn’t
The risk doesn’t end with chatbots. Increasingly, common productivity tools have embedded AI functions that activate automatically. Adobe Acrobat, for example, now offers an AI “assistant” that will summarize documents and answer questions about them. Many users have no idea that enabling that feature sends the entire document to Adobe’s cloud servers for processing, along with the user’s questions and the assistant’s responses. The same applies to note-taking applications like Otter, Fireflies, Notetaker, and others that record and summarize meetings.
These applications feel benign, offering an easier way to manage workload. But they raise hard questions. Where are those recordings stored? Who can access them? What if an AI summary of a privileged litigation call is later indexed in a third-party database? Once privileged content leaves the controlled environment of lawyer and client, it’s hard to argue it was ever truly confidential.
Why public AI tools have no place in litigation work
For trial lawyers, the temptation to use ChatGPT for research or drafting is understandable. The interface is fast, the prose is fluent, and the results are often helpful. But it’s also a trap.
Even if a lawyer begins with a benign query, the conversation can quickly drift into privileged territory. Moreover, any document uploaded to a non-ZDR system (a draft brief, a pleading, a client memo) becomes part of that system’s retained data, governed by its terms of service and, in ChatGPT’s case, by an active retention order.
It is difficult to imagine a stronger factual basis for an opposing counsel’s motion to compel than “the producing party uploaded this document to a third-party AI that logs all user data.” It’s the modern equivalent of leaving your client file at a public copy shop. It may also violate the duties of confidentiality and technological competence under the ABA Model Rules of Professional Conduct.
The only responsible stance is categorical: never upload case documents or discuss case details in any non-ZDR environment.
Zero Data Retention (ZDR) is the only real safeguard
ZDR systems are designed precisely to avoid these problems. In a ZDR environment, typically available through enterprise API agreements, all prompts and responses are deleted immediately after processing. The data is never stored, logged, or used for training.
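What does that look like in practice? The sketch below is a minimal illustration, not an endorsement of any vendor: it routes a summarization request through a provider’s enterprise API (here, the OpenAI Python SDK) instead of a consumer chat window. Note that ZDR is a contractual, account-level property; nothing in the code itself enforces it, so the example assumes an enterprise agreement with zero retention already in place, and the model name and prompt are placeholders.

```python
# Minimal sketch: routing work through an enterprise API account that is
# contractually configured for zero data retention (ZDR). The ZDR guarantee
# lives in the enterprise agreement, not in this code; the identical call
# made from an ordinary consumer account would NOT be zero-retention.
from openai import OpenAI

# Assumes OPENAI_API_KEY belongs to the firm's enterprise organization,
# the one covered by the ZDR agreement, never a personal account.
client = OpenAI()

excerpt = "..."  # privileged text, handled only under the firm's AI protocol

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a litigation support assistant."},
        {"role": "user", "content": f"Summarize this deposition excerpt:\n\n{excerpt}"},
    ],
)
print(response.choices[0].message.content)
```

The operational point is the routing: the request never touches ChatGPT.com, and under a ZDR agreement nothing about the exchange survives processing on the provider’s side.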
That distinction is not cosmetic; it’s legal. When no data is retained, there is nothing for anyone to subpoena, nothing to leak, and no way to reconstruct the conversation. The system functions more like a secure calculator than a correspondent.
For lawyers and clients seeking to preserve privilege while benefiting from AI tools, ZDR is the only defensible standard. Anything less is an unnecessary risk.
Privilege is a partnership
Privilege can’t be protected by lawyers alone. It’s a shared responsibility between counsel and client. Both must be deliberate about how they communicate, what technology they use, and where their information lives.
For clients, this means scrutinizing the tools the organization deploys. If your teams rely on AI to help draft internal communications, reports, or presentations about litigation, confirm exactly where those systems store data. Are they consumer chatbots? Microsoft Copilot integrations? Third-party plug-ins? The more connected your workflow becomes, the more porous your privilege can be. Corporate counsel should establish internal policies prohibiting the use of public AI systems for any content relating to pending or threatened litigation.
For lawyers, the duty is twofold: protect client data in your own workflows, and educate clients about their own exposure. That conversation is now as important as any discussion of billing rates or litigation holds. Clients have the right to expect that their outside counsel understands and controls the technology being used in their representation. Likewise, lawyers have the duty to confirm that their clients aren’t unknowingly undoing those efforts with an over-helpful AI assistant.
Coordinating the representation with shared AI protocols
The best practice emerging among sophisticated corporate legal departments and law firms is to treat AI use as a governance issue — one managed jointly. A shared AI protocol (one way to enforce it in practice is sketched after this list) should identify:
which systems are approved (only those with ZDR or equivalent controls),
what kinds of information may be processed (administrative versus substantive), and
how both sides will audit compliance.
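One way to keep such a protocol from remaining aspirational is to encode it as a gate in the tooling itself. The sketch below is hypothetical: the tool names, the two content classes, and the check_request helper are invented for illustration, and a real deployment would log to an audit system rather than the console.

```python
# Hypothetical sketch of a shared AI-use protocol encoded as a policy gate.
# All names here (tools, content classes, the helper) are illustrative.

# Systems both sides have approved: ZDR or equivalent controls only.
APPROVED_TOOLS = {"firm-zdr-api", "client-internal-llm"}

# What each approved tool may process under the joint protocol.
ALLOWED_CONTENT = {
    "firm-zdr-api": {"administrative", "substantive"},
    "client-internal-llm": {"administrative"},
}

def check_request(tool: str, content_class: str) -> bool:
    """Return True only if the tool is approved for this class of content.

    Logging the decision (never the content) gives both sides an audit trail.
    """
    allowed = tool in APPROVED_TOOLS and content_class in ALLOWED_CONTENT.get(tool, set())
    print(f"AI-use audit: tool={tool} class={content_class} allowed={allowed}")
    return allowed

# A consumer chatbot is simply not in the approved set:
assert check_request("consumer-chatbot", "substantive") is False
assert check_request("firm-zdr-api", "substantive") is True
```

The design point is that the approved-tool list and the content classes come straight from the negotiated protocol, so compliance can be audited mechanically rather than on the honor system.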
In some cases, clients now invite their outside counsel to use the client’s own internal AI system. That approach makes sense when the system is under explicit confidentiality and zero-retention guarantees, and when the lawyer’s use of it falls within the Kovel framework: the AI vendor acts as an agent assisting in the rendition of legal advice, not as an outside recipient of confidential information. United States v. Kovel, 296 F.2d 918 (2d Cir. 1961).
Absent those conditions, the safer structure is for the law firm to maintain its own controlled AI instance and permit client access through that environment. Either way, the objective is the same: keep privileged communications inside a closed loop that both sides control.
Where courts may go next
So far, few courts have ruled directly on AI and privilege, but the analogies are clear. Courts have long held that privilege can extend to translators, consultants, and other agents who are necessary for a lawyer to render legal advice, so long as those agents are under the lawyer’s supervision and bound by confidentiality. That principle provides the legal framework for modern AI systems: a properly contracted and supervised ZDR platform can be treated as an “agent” of the lawyer or client.
By contrast, using a public AI service outside the lawyer’s control looks nothing like Kovel. It looks like a voluntary disclosure to a stranger. No privilege doctrine can save that.
It’s easy to imagine how the first test case will arise: a subpoena to an AI vendor seeking chat logs about a pending case. The party that used a public chatbot will face an uphill battle arguing that those records are privileged. The party that used a ZDR system, by contrast, will have no records for anyone to produce, because none exist.
The slow erosion of privilege
Privilege is rarely lost in a single dramatic act. It erodes through small conveniences: clicking “summarize this document,” leaving a transcription bot running during a strategy call, or pasting a draft into ChatGPT for “feedback.” Each act feels minor, even efficient. Together, they can compromise the confidentiality that privilege depends on.
The danger isn’t the technology itself but the casualness with which we now use it. Confidentiality used to require deliberate effort; AI makes disclosure effortless.
The bottom line
AI doesn’t destroy privilege — complacency does.
Protecting privilege in the AI era requires more than good intentions. It requires infrastructure, policy, and discipline. Lawyers must vet and control every tool they use. Clients must demand the same. Both must agree on a shared standard — and that standard should be simple: if it isn’t zero-data-retention, it isn’t safe.
Privilege was once protected by sealed envelopes and private rooms. Today it lives in servers and software, where silence is no longer the default.
Guard it accordingly. Together.