AI in Legal Practice: What Bar Rules Actually Say About Using ChatGPT in Your Cases
May 3, 2026 · 9 min read · By ShieldDrop Legal Research Team
Attorneys across every practice area are using AI. Some are winning with it. Others are getting sanctioned. The difference almost always comes down to whether they understand three ABA Model Rules — and one critical distinction that bar ethics committees keep emphasizing.
Rule 1.1: Competence Now Includes Technology
ABA Model Rule 1.1 has always required attorneys to provide competent representation. In 2012, the ABA amended Comment 8 to add that competence includes keeping abreast of "changes in the law and its practice, including the benefits and risks associated with relevant technology."
Twenty-six states have adopted some version of this language. In those jurisdictions, failing to understand how AI tools work — their hallucination risks, data retention policies, and output limitations — could itself constitute a competence violation.
Rule 1.6: Confidentiality Is the Big One
ABA Model Rule 1.6(a) prohibits attorneys from revealing information relating to the representation of a client unless the client gives informed consent. Rule 1.6(c) further requires attorneys to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure" of such information.
When you paste client facts into a general-purpose AI tool like ChatGPT or Claude, you are transmitting that information to a third-party server. Depending on the platform and its data retention settings:
- Your input may be retained for model training (if you haven't opted out)
- It may be reviewed by human contractors as part of AI safety review processes
- It may be accessible via future legal process served on the AI provider
- It may be included in a data breach if the provider's infrastructure is compromised
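The risk list above suggests a simple engineering control: screen text for obvious client identifiers before it ever leaves the machine. Here is a minimal sketch in Python; the client roster, the regex patterns, and the `safe_to_send` helper are illustrative assumptions, not a complete PII detector or any particular product's feature.

```python
import re

# Hypothetical per-matter roster of names that must never leave the machine.
CLIENT_ROSTER = {"Jane Doe", "Acme Holdings LLC"}

# Illustrative patterns only -- a real deployment needs a vetted PII detector.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def safe_to_send(text: str) -> bool:
    """Return False if the text appears to contain client identifiers."""
    lowered = text.lower()
    if any(name.lower() in lowered for name in CLIENT_ROSTER):
        return False
    return not any(p.search(text) for p in PII_PATTERNS)

safe_to_send("Draft a generic motion template")   # -> True
safe_to_send("Jane Doe's SSN is 123-45-6789")     # -> False
```

A check like this only catches the most obvious leaks; it does not replace the attorney's Rule 1.6 judgment about what a prompt reveals in context.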
New York City Bar Association Formal Opinion 2024-5 addressed this directly: attorneys must evaluate each specific AI tool against Rule 1.6, including reviewing the provider's terms of service and data processing agreements.
Rule 5.3: You Are Responsible for AI Conduct
ABA Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that nonlawyer assistance is compatible with their professional obligations, and multiple bar opinions have now concluded that AI tools fall within this supervision duty. The attorney using the AI remains responsible for the output, including any errors, hallucinations, or fabricated citations.
This has already produced sanctions. In Mata v. Avianca (S.D.N.Y. 2023), the court sanctioned attorneys who filed a brief citing six nonexistent cases generated by ChatGPT, finding that their failure to verify the citations violated Rule 11. The court specifically noted that "the duty to review" extends to AI-generated content.
The Safe/Unsafe Line: A Practical Framework
Based on the current state of bar opinions across multiple jurisdictions, here's a working framework for categorizing AI use:

Generally safe:
- Drafting templates with no client data
- Research starting points (verified independently)
- Organizing your own notes and thinking
- Generating cross-exam question frameworks from sanitized facts
- AI tools with zero data retention and API-mode processing
- Summarizing public court records

Generally unsafe:
- Pasting real client names and facts
- Submitting AI-generated citations without verification
- Using consumer AI tools (free ChatGPT, free Claude)
- Transcribing confidential calls with tools that retain audio
- Filing AI-drafted documents without attorney review
- Using AI for jurisdiction-specific procedural guidance
Key State-Specific Guidance
Several state bars have issued formal opinions that go beyond the ABA model rules:
How ShieldDrop Is Built for Bar Compliance
ShieldDrop AI tools (CaseBrief, TrialMind, LexAI, VaultDictate) are designed with Rule 1.6 compliance in mind:
- Zero retention: Case materials, transcripts, and AI inputs are processed in RAM and discarded immediately — never written to disk or stored.
- No training on your data: API-mode processing means your inputs are never used to train any model.
- Explicit AI disclaimer on every output: Every analysis includes a disclaimer reminding attorneys to verify all citations and legal conclusions independently.
- Pseudonym-first design: We recommend and document the practice of using pseudonyms in AI tools, reducing Rule 1.6 exposure even in the worst-case scenario.
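The pseudonym-first practice can be sketched as a simple round trip: substitute placeholders before a prompt leaves the machine, then restore the real names locally in the response. This is an illustrative sketch under assumed names, not ShieldDrop's actual implementation; the `pseudonymize` and `restore` helpers and the mapping are hypothetical.

```python
# Sketch of a pseudonym-first workflow: real identifiers never leave the
# machine, and only the local mapping can reverse the substitution.

def pseudonymize(text: str, mapping: dict[str, str]) -> str:
    """Replace each real name with its placeholder before sending."""
    for real, alias in mapping.items():
        text = text.replace(real, alias)
    return text

def restore(text: str, mapping: dict[str, str]) -> str:
    """Swap placeholders back to real names in the AI's response."""
    for real, alias in mapping.items():
        text = text.replace(alias, real)
    return text

mapping = {"Jane Doe": "Client A", "Acme Holdings": "Company B"}
prompt = pseudonymize("Summarize Jane Doe's claim against Acme Holdings.", mapping)
# prompt == "Summarize Client A's claim against Company B."
```

Naive string replacement like this breaks on name variants ("Ms. Doe", initials), so a production workflow would need a more robust entity matcher; the point is only that the mapping stays local.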