Law firms are confronting the reality that while artificial intelligence (AI) can draft a brief in seconds, it can also hallucinate legal theories convincing enough to pass traditional review filters undetected.
As generative AI becomes a staple in legal drafting, a new risk has emerged: fabricated legal reasoning rather than merely fabricated facts. The resulting errors are increasingly leading to court-ordered sanctions.
Cat Casey, legal tech expert and partner at Masters AI Legal, said legal theory hallucinations are the trickiest to identify. Neither a quick Westlaw or Lexis search nor more robust anti-hallucination tools, such as BriefCatch’s RealityCheck, are equipped to flag this type of failure.
“A hallucinated legal theory passes every cite check and still blows up your case,” she told TechNewsWorld. “The occurrence of hallucinations in law offices is widespread and is seriously impacting court proceedings.”
Andrew Adams, partner and chief administrative officer at DarrowEverett, agreed that hallucinations and shadow AI (employees using unapproved AI tools) are two of the most serious risks facing the legal industry today.