A growing number of AI-generated errors found in legal documents submitted to courts has put attorneys under increased scrutiny.

Courts across the country have sanctioned attorneys for misuse of LLMs like OpenAI’s ChatGPT and Anthropic’s Claude, which have made up “imaginary” cases, suggested that attorneys invent court decisions to strengthen their arguments, and provided improper citations to legal documents.

Experts tell Fortune that more of these cases will crop up—and, along with them, steep penalties for the attorneys who misuse AI.

Damien Charlotin, a lawyer and research fellow at HEC Paris, runs a database of AI hallucination cases. He’s tallied 376 cases to date, 244 of which are U.S. cases.

“There is no denying that we were on an exponential curve,” he told Fortune.