AttPro Tip of the Month – Dec

December 13, 2023


These days, you can’t go to a legal conference or open a legal journal without encountering at least some commentary about AI’s impact on the profession. By now, you have probably heard about the “ChatGPT lawyer” who cited cases in a brief filed in federal district court that were not real but were generated by the chatbot. A thirty-year practitioner, the attorney admitted he “did not comprehend that ChatGPT could fabricate cases.” In response, some federal judges have issued standing orders requiring lawyers to certify either that they did not use artificial intelligence or that, if AI was used to draft their filings, a human checked the accuracy of the cases cited therein. The reasons for forgoing AI altogether, or for using it only in conjunction with traditional research methods such as Westlaw or Lexis, are well taken. As Judge Brantley Starr stated in his order, the platforms are prone to hallucinations and bias, and these large language models did not take an oath – you did.

Before writing off AI as not worth the risk, news flash: you may already be using it! If you asked Alexa a question today or spellchecked an email in your Gmail account, you used AI. And the messaging around the legal world seems to be that AI is here to stay. Westlaw Precision now includes generative AI, and Casetext is marketing CoCounsel as the “first AI legal assistant,” capable of reviewing documents, preparing legal research memos, and analyzing contracts. Spellbook boasts that it redlines contracts ten times faster with AI. Without question, these tools could produce in mere seconds a legal research memo that might take a new associate hours to write. Despite their flaws, these tools can be beneficial if the human (lawyer) touch remains.
We must harness AI for the good of our clients and to maintain the integrity of the courts. The ABA has taken the lead by forming a Task Force on Law and Artificial Intelligence to examine the impact of AI on the practice of law and the ethical implications for lawyers. The AI Task Force intends to explore issues such as the risks of bias and privacy, the use of AI to increase access to justice, AI governance, and the role of AI in legal education. While the ABA and other legal policy-making organizations will undoubtedly address these issues on a macro level, how can practitioners use AI ethically today?
First, consider your ethical obligations. Lawyers have a duty of candor to the court under Rule 3.3. As with any case citation you might pull from an older brief or a secondary source, you must shepardize it to ensure it is a good (and real) case. Second, be wary of what information you give the chatbot, which could compromise your duty of confidentiality under Rule 1.6. Finally, consider implementing firm-wide policies on the use of AI to comply with your duty to supervise junior lawyers who may be using artificial intelligence to generate memos or briefs. This is essential not only from an accuracy standpoint but also for purposes of training.

Although the efficiency of having ChatGPT produce a legal research memo may be helpful in the short run, it cannot replace the valuable writing skills developed by drafting a memo “the old-fashioned way.” So tread lightly and take all necessary precautions when using AI to ensure you include valid case law in all of your court filings.

AttPro Risk Management Tip of the Month

Do You Have Sufficient Protection?

Ready to protect your professional career with the best malpractice insurance on the market? Contact us today and let our experienced team guide you towards peace of mind. Your success is our priority.