• Since May 1, judges have called out at least 23 examples of AI hallucinations in court records.
  • Legal researcher Damien Charlotin’s data shows fake citations have grown more common since 2023.
  • Most cases are from the US, and increasingly, the mistakes are made by lawyers, not laypeople.

Judges are catching fake legal citations more frequently, and it’s increasingly the fault of lawyers over-relying on AI, new data shows.

Damien Charlotin, a legal data analyst and consultant, created a public database of 120 cases in which courts found that AI hallucinated quotes, invented fake cases, or cited other legal authorities that didn't exist. Hallucinations that escape a judge's attention go uncounted, so that number is a floor, not a ceiling.

While most mistakes were made by people struggling to represent themselves in court, the data shows that lawyers, and other professionals working with them, like paralegals, are increasingly at fault. In 2023, seven of the 10 cases in which hallucinations were caught involved so-called pro se litigants, people representing themselves without a lawyer, and three were the fault of lawyers; last month, legal professionals were at fault in at least 13 of the 23 cases in which AI errors were identified.

“Cases of lawyers or litigants that have mistakenly cited hallucinated cases has now become a rather common trope,” Charlotin wrote on his website.

The database includes 10 rulings from 2023, 37 from 2024, and 73 from the first five months of 2025, most of them from the US. Other countries where judges have caught AI mistakes include the UK, South Africa, Israel, Australia, and Spain. Courts around the world have also gotten comfortable punishing AI misuse with monetary fines, imposing sanctions of $10,000 or more in five cases, four of them this year.

In many cases, the offending individuals don't have the resources or know-how for sophisticated legal research, which often requires analyzing many cases citing the same laws to see how they have been interpreted in the past. One South African court said an "elderly" lawyer involved in the use of fake AI citations seemed "technologically challenged."

In recent months, attorneys working on high-profile cases at top US law firms have been caught submitting AI-generated errors. Lawyers at the firms K&L Gates and Ellis George recently admitted that they relied in part on made-up cases, blaming a miscommunication among the lawyers working on the case and a failure to check their work; the court responded with a sanction of about $31,000.

In many of the cases in Charlotin's database, the specific AI website or software used wasn't identified, and in some, judges concluded that AI had been used despite denials by the parties involved. Where a specific tool was named, however, ChatGPT appears in Charlotin's data more often than any other.

Charlotin didn't immediately respond to a request for comment.
