By Stephen Van Wert, Vanguard Specialty
One Non-Standard Lawyers Professional Liability Program is now attaching a manuscript endorsement that excludes any claim that is based upon any actual or alleged use of generative artificial intelligence (“AI”) by the law firm.
It is especially noteworthy that the exclusion uses "or alleged" language, which presumably means that the carrier will not be obligated to provide a defense even where a claim is merely alleged to be based upon the insured's use of AI.
The exclusion reads as follows:
This Policy does not apply to any “claim”, “wrongful act”, “damages” or “defense costs” based upon, arising out of, or in any way involving any actual or alleged use of “generative artificial intelligence” by the “insured”.
As used in this endorsement, “generative artificial intelligence” means any type of artificial intelligence system that generates or produces any form of content, including text, imagery, audio, media or synthetic data in response to training data or user prompts, including but not limited to ChatGPT, Bard, Midjourney or Dall-E.
Attorneys should exercise caution when integrating AI into their legal practices because of significant ethical and accuracy concerns. AI tools, while increasingly sophisticated, are not infallible and can introduce errors into legal research, case analysis, and even client communication. A malfunctioning AI system or inaccurate input data can produce erroneous legal conclusions or misinformed legal strategies, which could adversely affect case outcomes or client representation. Attorneys therefore must remain vigilant and ensure that AI systems are rigorously tested and validated before relying on them for critical legal tasks.
Another critical issue is the protection of client confidentiality and the management of sensitive information. AI systems often require access to vast amounts of data, raising concerns about data security and privacy. Attorneys are bound by strict confidentiality rules and must safeguard client information diligently. If not properly secured, AI systems can become targets for data breaches or unauthorized access, potentially compromising sensitive client information. To mitigate these risks, attorneys must ensure that AI providers adhere to stringent security protocols and must themselves implement robust data protection measures.
It is therefore understandable why a carrier would want to exclude this exposure. The exclusion, however, creates a gap in coverage to the extent that AI is used in the insured's law practice.