
Generative AI and Legal Ethics: How to Comply with State Bar Rules When Using It



Perhaps you have been considering using generative artificial intelligence (AI) in your litigation practice but want to make sure you are complying with your state bar rules of professional conduct. This post provides a roadmap for compliance and best practices for using generative AI products and services.


So far, just a handful of state bars have issued ethics guidance on the use of AI. You can review our state-by-state list to check whether yours is one of them. Hyperlinks included!


Whether or not your state bar has issued guidance on AI, it is wise to understand how bar regulators are evaluating lawyers’ use of generative AI.


What Is Generative AI?


Before diving in, it’s important to understand what is meant by generative AI.


Generative AI is a subset of artificial intelligence technologies that can generate new content, data, or solutions resembling human-created output. It does so by responding to “prompts,” which are instructions or questions submitted by a user. Unlike traditional AI, which typically analyzes and makes decisions based on existing data, generative AI goes further by creating (generating) new data, content, and even images that didn’t exist before. One prominent example is OpenAI’s ChatGPT, which has been groundbreaking in natural language processing and image generation. We’ve previously written about using ChatGPT in your litigation practice, which you can read here.


With that context, let’s turn to specific rules of professional conduct to understand the legal ethics of using generative AI. (All references are to the American Bar Association Model Rules of Professional Conduct.)


Rule 1.6: Client Confidentiality


Lawyers using generative AI have a duty to protect their clients’ confidential information under Rule 1.6. To comply, it is essential to understand the data handling policies of the generative AI service you intend to use. When you ask questions or submit prompts that relate to a case you are handling, you may unwittingly transmit confidential information about the case to the generative AI service provider in order to receive a response.
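To make that mechanism concrete, here is a minimal sketch of what happens when a prompt is submitted to a provider programmatically. It assumes the official openai Python package; the client name and fact pattern are hypothetical:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

# Everything in this prompt, including the client's name and the facts
# of the matter, is transmitted over the network to the provider's servers.
prompt = (
    "Assess the strengths of a negligence claim where our client, "
    "Jane Doe, slipped on an unmarked wet floor at a grocery store."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Once the request leaves your machine, what happens to that text is governed by the provider’s data handling policies, which is why reviewing those policies before use is essential.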


If you are using a free version of ChatGPT, Microsoft Bing, or Google Gemini, that creates a confidentiality problem: each keeps the data you send and may use it to train its services. This is explained in the ChatGPT and Gemini privacy policies. In our view, Microsoft offers no clear answer about how it handles your prompts unless you are on a paid plan. So, if you are using Microsoft’s free plans, you should assume Microsoft will use whatever you send for its own purposes.


Adobe recently introduced Adobe AI as a free in-app product that allows users to summarize documents and ask questions about them. Box.com and other document storage services are introducing similar products. Before using any of them with documents containing client or other confidential information, be sure to check the terms of service describing how they use your data and prompts.


Of course, you can carefully craft your questions to avoid revealing client confidences. In practice, however, this is difficult: the more detail you include in a prompt, the better the answer generally is. And because the exchange feels so conversational, the temptation is always there to reveal more than Rule 1.6 allows.


So, what should you do?


Under Rule 1.6, it is permissible to reveal confidences with informed consent. One way to comply with the rule is to explain your use of generative AI in your client engagement agreement, make sure the client understands it, and have the agreement signed before using generative AI in connection with the client’s matter.


However, that may not be ideal, since many of these services will retain and potentially use your prompts (including any documents you submit) for training purposes. For this reason, we recommend using practice-specific services that offer additional security for your data and your prompts, especially zero data retention.


For litigation, eSumry provides secure data storage and encryption, along with zero data retention for all prompts and responses when using its generative AI services. eSumry has also successfully completed rigorous auditing for compliance with the SOC 2 security framework.


Rules 5.1 and 5.3: Supervision


Supervising lawyers are, well, charged with supervision. It doesn’t matter whether the work is done by a paralegal, a law clerk, or an online AI assistant with a clever name. The duty of supervision under Rules 5.1 and 5.3 requires lawyers to review work done by non-lawyers, including work done by generative AI assistants, to ensure it is accurate.


ChatGPT and other services can produce some amazingly convincing responses. These models are trained to sound confident, and they are never in doubt. At the 2024 ABA TechShow, one of the speakers likened them to a blazingly fast associate attorney who occasionally makes stuff up. (The technical term is “hallucinate.”) According to computer scientists, this is expected behavior for large language models.


Two attorneys in New York got into trouble in a Southern District of New York case for filing a brief drafted by ChatGPT that included a raft of fictitious cases. They claimed not to have checked the citations because they seemed so real. Surprisingly, the Court sanctioned them only $5,000. Don’t be that lawyer: read the outputs and carefully check them for accuracy.


One way to reduce the likelihood of hallucinations is to use generative AI services that are specific to your practice. For example, eSumry for litigation offers CaseChat and deposition summaries, generative AI products that help you quickly analyze, summarize, search and query the documents in your case file. Because it is built to focus on the file documents for each of your cases, it can provide highly specific answers and outputs based on them.
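For readers curious about the general technique, here is a minimal sketch of grounding a model’s answer in case-file documents, often called retrieval-augmented generation. This is a generic illustration, not eSumry’s actual implementation; the openai package, the sample documents, and the naive keyword-overlap ranking are all assumptions for the example:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

# Hypothetical case-file excerpts standing in for a real document store.
case_documents = {
    "deposition_smith.txt": "Smith testified that the floor was mopped at 9 a.m. "
                            "and that no caution sign was available in the aisle.",
    "incident_report.txt": "The store log shows the spill was reported at 9:40 a.m. "
                           "and no warning sign was placed before the fall.",
}

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        docs.values(),
        key=lambda text: len(terms & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

question = "When was the floor mopped, and was a warning sign posted?"
context = "\n\n".join(retrieve(question, case_documents))

# Telling the model to answer only from the retrieved excerpts narrows its
# focus to the case file and leaves less room for invented facts or citations.
response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": f"Answer using only these case documents:\n\n{context}\n\n"
                   f"Question: {question}",
    }],
)
print(response.choices[0].message.content)
```

Grounding prompts in your own documents does not eliminate the duty to verify, but it makes the output checkable against a known source.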


Rule 1.5: Fees


A major reason for using generative AI services is to get case insights and create work product faster. For lawyers who handle legal work under fixed-fee or contingent-fee arrangements, getting work done more efficiently with generative AI services is a huge boost.


But for lawyers who bill by the hour, perhaps not so much. We addressed this tension with the billable hour revenue model in a recent post.


State bars that have considered the issue have made clear that it is improper under Rule 1.5 to bill a client for time saved by using generative AI services. In our view, this discourages hourly billers from adopting incredibly useful, time-saving technologies. As Stuart Teicher explained in the Keynote CLE at the 2024 ABA TechShow, “Something Gotta Give!”


Client expectations for their lawyers’ use of generative AI are also rising. Many clients have used it themselves, so they understand how much time it saves, and in turn, how it could reduce their legal expense.


Innovative lawyers will stay ahead of this curve and shift their business model away from hourly billing to alternative fee arrangements. At least one state bar has suggested that lawyers consider adopting contingent or flat fee arrangements “so that the benefits of increased efficiency accrue to both the lawyer and client alike.” (Florida Op. 24-1)


So, even if you bill by the hour, don’t risk getting left behind. We recommend discussing the issue with your clients, informing them of the potential time savings in performing their work, exploring new revenue models, and treating the conversation as an opportunity to deepen client relationships.


For now, while lawyers may save time on legal tasks by using AI, if they bill by the hour, they must bill honestly and accurately.


Additionally, some lawyers may want to recoup the cost of generative AI services from their clients. At least one state bar has issued guidance that lawyers must notify clients if they intend to charge them for the use of AI tools. 


Rules 7.1, 7.2 and 7.3: Advertising and Solicitation


Using AI chatbots for marketing and client intake raises concerns for compliance with the rules on lawyer advertising and solicitation. Because many AI-powered chatbot agents are quite good, potential clients may believe they are communicating with a lawyer. As noted above, generative AI language models are trained to sound confident, yet they can be extraordinarily wrong.


To comply with Rules 7.1, 7.2 and 7.3, lawyers must ensure that prospective clients are informed they are communicating with an AI assistant, not a real lawyer. The supervising lawyer rule (Rule 5.3) also comes into play because, without a clear disclaimer to the prospective client, the lawyer is responsible for any false or incorrect information provided by the chatbot.


What If My State Bar Has Not Issued Guidance on Using Generative AI? 


Even if your state bar has not issued an opinion or guidance on the use of generative AI, it’s prudent to consider what other state bars have published. Of course, their recommendations are based on state-specific rules. While those rules are largely consistent with the ABA Model Rules of Professional Conduct, you will want to follow the rules of your jurisdiction to the extent they differ in language or interpretation.

If in doubt, many state bars offer ethics hotlines that can quickly provide insights applicable to the use of generative AI in your practice. 


The Way Forward


Start by familiarizing yourself with your state bar guidelines and ethics opinions on the use of generative AI. Be sure to check our state-by-state list to see if your state bar has published any guidance.


If your bar hasn’t yet issued guidance, compare the discussion in this post with your own state bar rules of professional conduct. We recommend discussing it with your colleagues to gain additional perspectives. Then you’ll be ready to decide how generative AI can be used in your firm.


Next, put your firm’s approach in writing. You can check out our template for a generative AI use policy for your law firm.


Once you have a policy in place, we recommend hosting a “lunch and learn” program to review it with everyone, answer any questions, and ensure your team is safely using approved generative AI services.


We hope this post has been helpful for considering the use of generative AI in your practice. If you’d like to learn more about how eSumry can support your litigation practice, click below.


