**Lawyers for MyPillow CEO Mike Lindell Face Possible Sanctions After Using AI to Draft Error-Filled Legal Brief**
The attorneys for MyPillow CEO and prominent election conspiracy theorist Mike Lindell are facing possible sanctions after filing a legal brief riddled with errors that stemmed from their use of generative AI. While the lawyers admitted to using AI, they contend the mistakes were ultimately the result of human error.
On Wednesday, U.S. District Judge Nina Wang of Colorado issued an order identifying nearly 30 defective citations in a brief filed on February 25 by attorneys Christopher Kachouroff and Jennifer DeMaster of the firm McSweeney, Cynkar & Kachouroff. The brief was submitted in a defamation suit brought against Lindell by former Dominion Voting Systems employee Eric Coomer.
The defects, Judge Wang wrote, included misquotes of cited cases, misrepresentations of legal principles, misstatements about whether case law came from binding authority such as the Tenth Circuit, misattributions of cases to the district, and, most seriously, citations to cases that do not exist.
The court noted that the attorneys were given a chance to explain the errors but failed to offer an adequate account. Only when questioned directly did Kachouroff admit that he had used generative AI to draft the brief and that he had not checked the citations it produced.
As a result, the court ordered the attorneys to show cause why they should not be referred for disciplinary proceedings for violating the rules of professional conduct, and why they, Lindell, and their firm should not be sanctioned.
**Attorneys Blame Filing Mistake on Human Error**
In a response filed on Friday, the attorneys said they had been unaware of the errors and were caught off guard when the court raised them. After reviewing their files, they contended that DeMaster had inadvertently filed the wrong version of the brief: an earlier draft still containing the AI-generated mistakes.
To support their account, they submitted other versions of the brief along with email correspondence between Kachouroff and DeMaster discussing revisions.
“At that time, counsel had no reason to suspect that an AI-generated or unverified draft had been submitted,” the response stated. “After the hearing and upon further scrutiny, it became immediately evident that the document filed was not the accurate version. It was an earlier draft, submitted inadvertently due to human error.”
The attorneys also defended the use of AI in legal practice, arguing that AI tools are appropriate when used correctly. Kachouroff said he regularly uses tools such as Microsoft’s Copilot, Google’s Gemini, and X’s Grok to assist with legal research, though he noted he is the only attorney at his firm who does so. Notably, he also claimed he had never heard the term “generative artificial intelligence” before this incident.
The lawyers have asked for permission to refile a corrected version of the brief and have urged the court not to impose disciplinary measures.
**A Growing Pattern of AI Errors in the Legal Profession**
This case joins a growing list of incidents in which legal professionals have misused AI. In June 2023, two attorneys were sanctioned for citing fictitious cases generated by ChatGPT. Later that year, a lawyer representing former Trump attorney Michael Cohen was found to have cited nonexistent cases produced by Google’s Bard. And in February, yet another lawyer appeared to cite fabricated ChatGPT cases, prompting the law firm Morgan & Morgan to warn its staff about the risks of relying on AI without verification.
Despite these cautionary tales, many in the legal profession have apparently yet to internalize the dangers of using AI without checking its output.