Defense Counsel Guidelines for Using AI
When corporate legal departments and insurance organizations manage litigation portfolios, outside counsel guidelines have long been the primary governance tool. They set billing rates, define staffing expectations, establish reporting cadence, and outline the boundaries of the engagement. But those guidelines were written for a world where the only variables were people, hours, and paper.
That world is gone.
Today, defense firms are using AI to analyze depositions, draft motions, build timelines, and synthesize case files. Some of those tools preserve privilege and confidentiality. Others don't. Some accelerate defensible work. Others introduce risk that doesn't surface until after a motion has been filed with hallucinated case citations (yes, even defense firms have been sanctioned).
The question facing legal operations, litigation management, and claims governance teams is no longer whether panel counsel will use AI—it's which AI they'll use, and whether you'll know about it.
Traditional outside counsel guidelines can't answer that question. They weren't built to. What's needed is a new layer: a self-enforcing AI addendum that defines approved systems, restricted uses, and prohibited practices—without micromanaging the substantive work of defense.
This post lays out a practical framework for that addendum, organized around governance as a joint client-firm protocol rather than a compliance checklist. Done right, the addendum becomes a shared operating standard that protects privilege, preserves work product boundaries, and gives both sides clarity on what "approved AI" actually means.
Why Traditional Guidelines Fall Short
Outside counsel guidelines were designed to manage cost, quality, and communication. They work well for those purposes. But AI introduces a different set of risks that billing rules and staffing matrices don't address:
- Privilege waiver. Uploading a privileged document to a public AI system can constitute voluntary disclosure to a third party, potentially waiving privilege.
- Data retention. Many consumer AI tools log prompts and responses indefinitely. Those logs are discoverable.
- Training on client data. Some platforms use client inputs to improve their models, meaning your case facts could inform someone else's output.
These aren't hypothetical concerns. In the New York Times litigation against OpenAI, the court ordered preservation and production of 20 million ChatGPT prompts and responses—a stark reminder that what feels like a private conversation with a machine may be anything but.
The problem is that most outside counsel guidelines say nothing about AI, or they include a vague sentence like "counsel shall use technology responsibly." That's not governance. That's hope.
The Three Buckets: Allowed, Restricted, Prohibited
A defensible AI addendum starts by defining three clear categories. Every AI system falls into one of them, and the classification determines what panel counsel can and cannot do.
Bucket 1: Allowed Uses
These are AI systems that meet your organization's security and confidentiality standards. They should be explicitly named in the addendum, along with the types of work they're approved for. We have long advocated for firms to adopt AI-use policies specifying which tools can (and cannot) be used; clients assigning work should do the same in their outside counsel guidelines.
Example language:
Approved AI systems for our matters include [identify approved platforms]. These systems have been evaluated for zero data retention (ZDR), encryption in transit and at rest, and SOC 2 compliance. Counsel may use approved systems for [describe approved uses such as deposition analysis, document review, timeline generation, and drafting assistance]. All outputs must be reviewed by a qualified attorney before use in any court filing, client communication, or strategic decision.
The key is specificity. "AI tools" is too broad. Naming the platforms removes ambiguity and gives firms a clear lane to operate in.
Bucket 2: Restricted Uses
Restricted uses are activities where AI may be helpful but only under specified conditions, typically with advance notice or client approval.
Example language:
Counsel may not use AI systems to [list restricted uses such as process expert reports or draft settlement communications] without prior written approval. If counsel seeks to use an AI system not listed as approved, counsel must provide the case manager with the system's data retention policy, security certifications, and a description of the intended use, and obtain written approval before use.
This bucket acknowledges that litigation is dynamic. New tools emerge. Case needs vary. The restriction isn't a prohibition; it's a gate that requires a conversation before it opens.
Bucket 3: Prohibited Uses
Prohibited uses are non-negotiable. These are the practices that create unacceptable risk, regardless of the tool.
Example language:
Counsel shall not use any public or consumer-grade AI system (including but not limited to ChatGPT, Gemini, Claude, or similar platforms accessed via free or individual subscription accounts) to process any document, transcript, communication, or other material related to our matters. Counsel shall not upload privileged or confidential information to any AI system that does not provide zero data retention. Counsel shall not use AI systems that train on user inputs or retain logs of the contents of prompts and responses.
This is where you draw the line. Public and consumer-grade AI tools—no matter how useful—are incompatible with privilege protection in litigation. The addendum should say so plainly.
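To make the three buckets concrete, here is a minimal sketch of how the classifications might be captured in machine-readable form, for instance to drive automated checks in a litigation management platform. Every platform name, task, and field below is an illustrative placeholder, not a reference to any real product or to specific addendum terms.

```python
# Illustrative sketch only: the addendum's three buckets expressed as data.
# Every system name, task, and condition below is a hypothetical placeholder.

AI_POLICY = {
    "allowed": {
        "ApprovedPlatformA": [  # hypothetical approved system and its uses
            "deposition analysis", "document review",
            "timeline generation", "drafting assistance",
        ],
    },
    "restricted": [  # permitted only with prior written client approval
        "expert report processing",
        "settlement communications",
    ],
    "prohibited": [  # non-negotiable, regardless of tool
        "public or consumer-grade AI accounts",
        "systems without zero data retention",
        "systems that train on user inputs or retain prompt logs",
    ],
}

def classify_system(system: str) -> str:
    """Unknown systems default to 'restricted', which forces the approval
    conversation required by Bucket 2 before any use."""
    if system in AI_POLICY["allowed"]:
        return "allowed"
    return "restricted"

print(classify_system("ApprovedPlatformA"))  # -> allowed
print(classify_system("BrandNewTool"))       # -> restricted
```

Note the design choice in the sketch: anything not explicitly approved falls into the restricted bucket by default, which is exactly the posture the addendum should take toward new tools.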
Human-in-the-Loop Guarantees and Review Expectations
Speed is valuable, but only if the work is defensible. That's why every AI addendum should include an explicit human-in-the-loop requirement.
AI can draft a motion, summarize a deposition, or flag inconsistencies across transcripts. But it cannot replace attorney judgment. The lawyer must review, verify, and take responsibility for the output before it's used.
Example language:
All AI-generated work product must be reviewed and approved by a qualified attorney before use in any filing, client communication, or case strategy decision. Counsel remains responsible for the accuracy, completeness, and appropriateness of all work product, regardless of the tools used to produce it. Counsel shall not rely on AI-generated legal research, case and statutory citations, arguments, or interpretations without independent verification of accuracy.
This language tracks the guidance in ABA Formal Opinion 512 and more recent state bar opinions, which make clear that lawyers using generative AI must exercise competence and supervision. The addendum simply operationalizes that duty in the context of the engagement.
It also protects the client. If something goes wrong—hallucinated case citations, mischaracterized facts, breach of privilege—the responsibility stays with counsel, not the AI tool.
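For teams that manage work product in a platform, the review requirement can also be enforced as a simple gate. The sketch below is purely illustrative; the record fields and function names are hypothetical, not part of any actual system.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkProduct:
    # Hypothetical record a platform might keep for each AI-assisted draft.
    description: str
    ai_assisted: bool
    reviewed_by: Optional[str] = None   # attorney who signed off
    review_date: Optional[date] = None

def release_for_filing(item: WorkProduct) -> None:
    """Block release of AI-assisted work product until a documented
    attorney sign-off exists, mirroring the human-in-the-loop clause."""
    if item.ai_assisted and item.reviewed_by is None:
        raise PermissionError(
            f"{item.description!r} requires attorney review before filing"
        )
    print(f"Released: {item.description}")

draft = WorkProduct("Motion for summary judgment", ai_assisted=True)
# release_for_filing(draft)  # would raise PermissionError until reviewed
draft.reviewed_by, draft.review_date = "J. Smith", date.today()
release_for_filing(draft)    # -> Released: Motion for summary judgment
```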
Guardrails to Preserve Privilege and Work Product Boundaries
Privilege is fragile. Once waived, it's nearly impossible to claw back. That's why the AI addendum must include specific guardrails designed to preserve confidentiality and work product protection.
Zero Data Retention (ZDR) as the Foundation
The single most important technical safeguard is zero data retention. In a ZDR system, prompts and responses are processed in real time and immediately deleted. No logs. No training data. No records to subpoena.
Example language:
All AI systems used in our matters must guarantee zero data retention (ZDR) for all prompts, responses, and uploaded materials. Any system that retains user data, logs interactions, or uses client inputs for model training is prohibited.
ZDR isn't a nice-to-have. It's the baseline for privilege protection in the AI era.
No Third-Party Training on Client Data
Some AI vendors offer "enterprise" plans that don't retain data but still reserve the right to use anonymized inputs for model improvement. That's not good enough for litigation.
Example language:
Counsel shall not use any AI system that trains on client data, whether in identifiable or anonymized form. All vendor agreements must explicitly prohibit the use of client inputs for model training, and counsel shall provide copies of relevant contract provisions upon request.
This provision closes a loophole that many clients are unaware exists.
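Taken together, the ZDR and no-training guardrails reduce to a short screening check. The sketch below assumes a hypothetical vendor questionnaire has already been collected during review; the field names are illustrative, not drawn from any real vendor agreement.

```python
from dataclasses import dataclass

@dataclass
class VendorDataPolicy:
    # Hypothetical questionnaire fields collected during vendor review.
    zero_data_retention: bool       # prompts/responses deleted after processing
    trains_on_client_inputs: bool   # any training use, even anonymized
    retains_prompt_logs: bool       # keeps logs of prompt/response contents

def meets_baseline(policy: VendorDataPolicy) -> bool:
    """A system clears the bar only if it retains nothing and trains on
    nothing; any single failure is disqualifying."""
    return (policy.zero_data_retention
            and not policy.trains_on_client_inputs
            and not policy.retains_prompt_logs)

# An "enterprise" plan that still trains on anonymized inputs fails the test.
enterprise_plan = VendorDataPolicy(
    zero_data_retention=True,
    trains_on_client_inputs=True,
    retains_prompt_logs=False,
)
assert not meets_baseline(enterprise_plan)
```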
Transparency
Governance only works if it's verifiable. One benefit of naming approved platforms is that defense counsel doesn't need to track each individual use of those systems. The addendum should, however, give the client the right to audit unapproved AI usage and require counsel to document which systems were used and for what purpose. That way, when new AI platforms come to market, defense counsel has an incentive to obtain approval before using them.
Example language:
We encourage counsel to identify and request approval to use AI systems that are not currently identified as approved. We reserve the right to audit counsel's use of unapproved AI systems at any time during the engagement. Counsel shall maintain records of which unapproved AI systems were used, for what tasks, and on what dates. Upon request, counsel shall provide a summary of unapproved AI usage for any matter covered by this addendum.
This isn't about distrust. It's about accountability. If a privilege issue arises later, both defense counsel and client need to be able to reconstruct what happened.
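If counsel wants to keep these records in structured form, a minimal log entry needs only a few fields. This sketch is illustrative; the names are hypothetical, and any real platform would add matter identifiers and access controls.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UnapprovedAIUse:
    # Fields mirror the addendum: which system, for what task, on what date.
    system: str
    task: str
    used_on: date
    approval_requested: bool = False

def usage_summary(entries: list) -> str:
    """Produce the summary a client may request under the addendum."""
    if not entries:
        return "No unapproved AI usage recorded."
    return "\n".join(
        f"{e.used_on.isoformat()}: {e.system} - {e.task}" for e in entries
    )

log = [UnapprovedAIUse("NewVendorTool", "transcript summarization",
                       date(2025, 3, 4))]
print(usage_summary(log))
```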
From Policy to Practice
Guidelines are only as effective as their enforcement mechanisms. The challenge most legal departments face isn't drafting AI policies—it's ensuring panel counsel follows them across hundreds of active matters.
This is where technology becomes a governance partner. Purpose-built litigation management platforms can operationalize these guidelines by:
Providing real-time visibility – Legal ops teams can monitor compliance across the entire panel without manual reporting requests
Enforcing zero data retention standards – Systems architected for privilege protection ensure no client data is used for model training
Start small, scale strategically. Many organizations begin with a focused pilot, selecting two or three panel firms handling a specific litigation type (bodily injury, employment defense, or subrogation, for example). This allows the team to:
Test the AI addendum language in a controlled environment
Gather feedback from counsel on what works (and what doesn't)
Demonstrate ROI before broader rollout
Build internal champions among both legal ops and panel counsel
Platforms like esumry, designed specifically for litigation defense with secure, privilege-safe AI capabilities, make phased implementation straightforward. The goal isn't to replace your panel's judgment—it's to give them better tools while giving you better oversight.
The result: governance that doesn't require constant policing, and innovation that doesn't compromise privilege.
The Bottom Line
Outside counsel guidelines were built for a world where the only variables were people and hours. AI changes that. The new variable is systems—and systems introduce risks that billing caps and staffing ratios can't manage.
The solution isn't to ban AI. It's to govern it. That means defining approved systems, restricted uses, and prohibited practices in clear, enforceable language. It means requiring zero data retention and human review. And it means treating AI governance as a joint protocol between client and counsel, not a compliance burden imposed from above.
Done right, the AI addendum doesn't slow defense work down. It speeds it up—by giving panel counsel a clear lane to operate in, with tools that meet your security and privilege standards from the start.
The firms that adopt this framework early will deliver faster, more defensible work. The clients that require it will protect privilege, reduce risk, and maintain oversight without micromanaging every task.
That's what outside counsel guidelines look like in the age of AI. And it starts with a conversation about which AI systems belong in your litigation portfolio, and which ones don't.
Download the AI Addendum Template
Ready to implement an AI addendum for your outside counsel guidelines? We've created a downloadable template that includes all the language above, formatted for easy editing and integration into your existing guidelines.
Want to see how esumry helps defense teams work faster while preserving privilege and meeting the standards outlined in this addendum? Schedule a demo to learn more.
About the Author
James Chapman is a co-founder of esumry and a former defense litigator. He writes about the intersection of AI, litigation strategy, and legal operations.
With esumry, privilege is protected through ZDR (zero data retention), and case analysis is fast, strategic, and secure. Create timelines, tag testimony, assess credibility, and get ahead of how the other side will use the record—before they do.