All over the country, lawyers are using artificial intelligence to write briefs and help them prepare for court. It is not going well.
A family in Alabama lost a trust dispute last month because their lawyer filed citations to cases that do not exist. The Alabama Supreme Court dismissed their appeal, calling the conduct egregious, and barred the lawyer from filing in that court again without co-counsel sign-off.
In the same month, a federal judge in Oregon imposed $110,000 in sanctions on two lawyers, the largest penalty for AI hallucinations in American legal history, after they submitted filings containing 23 fabricated citations and eight invented quotations. The case was subsequently dismissed.
In Manhattan, a judge ruled recently that a defendant who used a general-purpose AI chatbot to help prepare his case had waived attorney-client privilege. If you type your defense strategy into a chatbot, the government can subpoena it, read it, and use it against you.
According to a database compiled by lawyer and data scientist Damien Charlotin, there have now been more than 1,300 cases globally in which a court or tribunal has commented on AI-generated hallucinations in legal filings. Behind each of those cases is a client who paid a lawyer and trusted the system, and, more often than not, a lawyer who placed blind trust in a technology that generates text with complete confidence and no capacity for self-verification.