ISLAMABAD (PEN): In the United States, a growing number of lawyers are facing judicial scrutiny for submitting court filings that contain fabricated legal citations generated by artificial intelligence (AI). The incidents underscore the risks of unverified AI use in legal practice, a mounting concern as the legal industry becomes more reliant on advanced technologies.
One of the most recent cases centers on two attorneys in Wyoming who face possible sanctions after a federal judge discovered fabricated case citations in a lawsuit filed against retail giant Walmart. The error was traced to one of the lawyers using an AI tool that generated the false references. The lawyer admitted the mistake was unintentional, but the episode nevertheless drew attention to the pitfalls of relying on AI without proper verification.
In response to this and similar incidents, Morgan & Morgan, a major personal injury law firm, issued an internal warning to its attorneys about the dangers of using unverified AI-generated content in legal documents. While the firm declined to comment further, the warning serves as a cautionary reminder to other firms. Walmart also declined to comment.
The Wyoming case is far from isolated. According to a Reuters report, at least seven similar cases have emerged in recent years in which courts have questioned or disciplined lawyers over fictitious, AI-generated case law. A 2023 Thomson Reuters survey found that 63% of lawyers have used AI in their work, and 12% use it regularly. AI tools have increasingly found their way into legal research and drafting, with many firms turning to external AI providers or building proprietary systems. Yet for all its efficiency, the technology is prone to “hallucinations”: fabricated details or fictitious information with no factual grounding.
These hallucinations occur because AI generates responses from statistical patterns rather than from verified facts, a limitation that raises particular alarm in high-stakes legal settings. Federal judges have already begun cracking down on the resulting errors. One of the earliest instances came in June 2023, when a Manhattan federal judge fined two New York lawyers $5,000 for citing non-existent cases in a personal injury lawsuit against an airline.
Other cases have followed. In a lawsuit involving Michael Cohen, former personal attorney to Donald Trump, a New York judge weighed sanctions after Cohen unknowingly passed AI-generated citations to his attorney; no penalties were ultimately imposed, though the judge called the situation “embarrassing.” In November 2024, a Texas federal judge fined a lawyer $2,000 for citing fictitious cases in a wrongful termination lawsuit and ordered the lawyer to complete a course on generative AI in the legal field. And in a case involving a parody of Vice President Kamala Harris, a federal judge in Minnesota ruled that a misinformation expert had damaged his own credibility by admitting to citing AI-generated references.
Legal ethics rules require attorneys to verify and stand behind the accuracy of their submissions to the court. The American Bar Association (ABA) has reinforced this stance, advising its 400,000 members that even unintentional errors originating from AI tools can lead to disciplinary action. The warning underscores the serious repercussions of using AI without rigorous fact-checking.
Experts are now calling attention to the broader issue of AI literacy within the legal profession. Andrew Perlman, dean of Suffolk University Law School, put it bluntly: “When lawyers use ChatGPT or other AI tools to generate citations without verifying them, that’s incompetence, pure and simple.”
Legal scholars such as Harry Surden, a University of Colorado law professor specializing in AI and law, argue that the problem lies not in the technology itself but in how lawyers use it: AI is merely exposing a deeper weakness in lawyers’ verification practices. While AI tools can be valuable in streamlining tasks like research and drafting, Surden emphasized, they are not infallible, and lawyers must understand their strengths and limitations to use them responsibly.
The rise of AI in law offers undeniable potential for efficiency, but its reliability remains a pressing concern. As courts continue to tighten scrutiny, lawyers may be forced to reconsider how they integrate AI into their practices—or face serious professional consequences. The legal profession, like many others, is navigating a new technological landscape, and the stakes for ensuring accuracy and ethical integrity have never been higher.
AI’s increasing role in the legal field is transforming how lawyers approach their work, but the fabricated-citation incidents serve as a critical reminder: technology must be wielded with care, especially when the consequences of error can be far-reaching. Lawyers must adapt, learn, and engage critically with AI tools to safeguard their practice and their profession.