Generative AI and Legal Ethics

The legal industry stands at a pivotal juncture where the integration of generative AI (GenAI) tools offers transformative potential. From automating routine tasks to enhancing legal research and drafting, GenAI promises increased efficiency and accessibility. However, this technological advancement also brings forth ethical dilemmas and professional responsibilities that legal practitioners must navigate carefully.

ABA Formal Opinion 512

Recognizing the profound implications of AI in legal practice, the American Bar Association (ABA) issued Formal Opinion 512 in July 2024. This landmark opinion provides a comprehensive framework addressing lawyers' ethical obligations when employing GenAI tools. Key takeaways include:

  1. Competence (Rule 1.1): Lawyers must understand the capabilities and limitations of AI tools to ensure competent representation.
  2. Confidentiality (Rule 1.6): Safeguarding client information remains paramount, necessitating caution when inputting data into AI systems.
  3. Supervision (Rules 5.1 & 5.3): Attorneys are responsible for overseeing both human and technological assistants, ensuring compliance with ethical standards.
  4. Communication (Rule 1.4): Clear communication with clients about the use of AI tools and their implications is essential.
  5. Fees (Rule 1.5): Billing practices must reflect the actual work performed, considering the role of AI in service delivery.

These guidelines underscore that while AI can be a valuable asset, it does not absolve lawyers from their fundamental ethical duties.

The Perils of Over-reliance

AI mishaps have spotlighted the dangers of uncritical reliance on AI-generated outputs. The list below includes some of the legal industry's most recognizable names; many more examples have been reported, and others likely remain unpublicized. It is a cautionary tale: even elite firms and highly skilled attorneys can fall prey to the seductive, yet often misunderstood, promise of generative AI.

  1. Butler Snow LLP: The firm faced judicial scrutiny after submitting court filings containing fictitious case citations produced by ChatGPT. The presiding judge considered sanctions, emphasizing the gravity of presenting false information to the court. Reuters reported as follows: "A Butler Snow partner used ChatGPT to find cases supporting what he thought was a well-established legal position, the response said. He inserted the citations into a draft brief without verifying their accuracy. 'These citations were ‘hallucinated’ by ChatGPT in that they either do not exist and/or do not stand for the proposition for which they are cited,' the response said."
  2. Morgan & Morgan: In Wadsworth v. Walmart Inc., U.S. District Judge Kelly H. Rankin sanctioned three attorneys—Rudwin Ayala, T. Michael Morgan, and Taly Goody—for submitting motions in limine that cited eight non-existent cases generated by an AI tool. The court found that all three attorneys violated Rule 11 of the Federal Rules of Civil Procedure. Ayala, who drafted the motions using the firm's in-house AI tool, had his pro hac vice admission revoked and was fined $3,000. Morgan and Goody, who signed the filings without proper review, were each fined $1,000. The court acknowledged their candor and policy reforms as mitigating factors.
  3. K&L Gates and Ellis George LLP: A court-appointed special master imposed $31,100 in sanctions against both firms for submitting a supplemental brief with inaccurate and fictitious legal citations. An attorney at Ellis George used AI tools to create an outline, which was then shared with K&L Gates. The brief incorporated this unverified material and was filed without due diligence. Judge Michael Wilner referred to the situation as a "collective debacle," emphasizing the need for proper oversight and verification when using AI-generated content.
  4. Latham & Watkins: Reuters reported that the firm admitted to including a footnote in an expert report that cited and linked to a real article but attached an AI-generated fake title and incorrect authors.

These cases underscore a critical lesson: speed and innovation cannot come at the expense of professional responsibility. The failure to maintain a human-in-the-loop approach when using AI tools not only undermines trust, but also exposes legal practitioners to real malpractice risk. Technology cannot replace judgment, and these incidents demonstrate what happens when that boundary is ignored.

Rethinking the Shame Game: From Tool Use to Professional Mastery

Let’s be clear: this is not a cautionary tale meant to reject AI. On the contrary, it is a call to embrace it with intention and a sound process in place.

What we would like to see is the opposite. The legal profession's quiet, persistent shaming of those who choose to integrate generative AI into their workflows needs to change. This bias suggests that lawyers using AI are somehow less diligent, less serious, or less authentic. That view is flawed and outdated, and it misses the larger point entirely. Mastery in law has never meant avoiding tools; it means knowing how to use them wisely, ethically, and effectively.

Consider a skilled carpenter using a power tool. In untrained hands, it’s a liability. In expert hands, it enhances precision and productivity. The same tool can build a masterpiece or a mess—the outcome depends on the craftsperson, not the equipment.

Or think of a concert pianist performing on a Steinway. The piano itself does not produce great music. It's the artist’s technique, interpretation, and judgment that create a memorable performance. Generative AI functions the same way: it's not the performance—it’s the instrument. In the hands of a thoughtful lawyer, it can elevate work without compromising integrity.

In both examples, the takeaway is clear. The presence of advanced tools does not weaken the profession. Misuse or overreliance without critical thinking does.

The real threat to legal ethics is not the adoption of AI, but the careless use of it without verification or accountability. When attorneys submit inaccurate filings or fabricated case law, the fault lies not with the software but with the lawyer who failed to exercise professional judgment. That is not innovation. It is malpractice.

Rather than shaming lawyers who adopt AI responsibly, the legal field should be promoting higher standards of competence. We should be encouraging lawyers to understand how these tools work, how they can be used effectively, and how to supervise their outputs. When used skillfully, AI is not a shortcut. It is a force multiplier. Its value is determined by the discipline and discernment of the professional using it.

Embracing AI: A Call for Responsible Integration

The path forward involves embracing AI's benefits while maintaining rigorous ethical standards. Legal professionals should:

  1. Educate Themselves: Understand AI tools' functionalities and limitations.
  2. Implement Verification Protocols: Establish processes to cross-check AI-generated content before filing (a minimal sketch of one such check appears after this list).
  3. Foster a Culture of Innovation: Encourage open discussions about AI use, sharing best practices and lessons learned.
  4. Advocate for Clear Guidelines: Support the development of policies that delineate acceptable AI use in legal contexts.
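
To make the verification point concrete, here is a minimal Python sketch of what an automated first-pass check might look like: it extracts reporter-style citations from a draft and flags any that have not yet been confirmed by a human reviewer. The citation pattern, the file names (draft_brief.txt, verified_citations.txt), and the workflow are illustrative assumptions, not a prescribed or complete procedure; an attorney would still need to confirm every flagged authority in Westlaw or Lexis before filing.

```python
import re

# Illustrative sketch of a verification protocol for AI-assisted drafts.
# It flags reporter-style citations in a draft that do not appear in a
# human-verified list. The pattern and file names are assumptions for
# this example; this is a first-pass filter, not a substitute for
# checking each authority in a legal research service.

# Matches common reporter citations, e.g. "512 U.S. 123", "45 F.3d 678",
# "100 F. Supp. 2d 200". Real citation formats are far more varied.
CITATION_PATTERN = re.compile(
    r"\b\d+\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d+\b"
)

def extract_citations(text: str) -> set[str]:
    """Collect every reporter-style citation found in the draft."""
    return set(CITATION_PATTERN.findall(text))

def flag_unverified(draft: str, verified: set[str]) -> set[str]:
    """Return citations in the draft that no reviewer has confirmed."""
    return extract_citations(draft) - verified

if __name__ == "__main__":
    # Hypothetical inputs: the AI-assisted draft and a running list of
    # citations an attorney has already confirmed against the record.
    with open("draft_brief.txt") as f:
        draft_text = f.read()
    with open("verified_citations.txt") as f:
        verified = {line.strip() for line in f if line.strip()}

    for citation in sorted(flag_unverified(draft_text, verified)):
        print(f"UNVERIFIED: {citation} -- confirm before filing")
```

The value of such a check is procedural rather than technical: it forces a deliberate human verification step between the AI output and the court filing, which is precisely the step the sanctioned firms above skipped.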

By adopting a balanced approach, the legal community can harness AI's potential to enhance service delivery, improve access to justice, and uphold the profession's ethical foundations.
