
Lawyer for Anthropic Apologizes for Fake Legal Citation Generated by His Client’s Own Claude AI

[Image: robot in a dunce cap, created with the Bing AI image generator]

A lawyer representing AI company Anthropic admitted to including an inaccurate citation generated by the company’s Claude chatbot in an ongoing legal dispute with music publishers.

TechCrunch reports that in a filing made in a Northern California court on Thursday, Anthropic acknowledged that its AI system Claude “hallucinated” a legal citation used by the company’s lawyers. The imaginary citation included “an inaccurate title and inaccurate authors,” according to the filing.

The admission came as part of Anthropic’s response to allegations raised earlier this week by lawyers representing Universal Music Group and other music publishers. The publishers accused Anthropic’s expert witness, company employee Olivia Chen, of using Claude to cite fabricated articles in her testimony in their ongoing lawsuit against the AI firm.

Following these accusations, federal judge Susan van Keulen ordered Anthropic to address the claims of fabricated citations. In their Thursday filing, Anthropic’s lawyers explained that their “manual citation check” failed to catch the errors produced by Claude’s hallucinations. They apologized for the error, characterizing it as “an honest citation mistake and not a fabrication of authority.”

The legal dispute between Anthropic and the music publishers is one of several recent cases in which copyright owners have challenged tech companies over the alleged misuse of their intellectual property to train generative AI systems. As AI technologies advance rapidly, the legal and ethical questions around using copyrighted data to develop these tools remain complex and contentious.

This incident also highlights the growing trend of lawyers turning to AI assistance in their work, sometimes with problematic results. Just this week, a California judge criticized two law firms for submitting “bogus AI-generated research” in his court. In January, an Australian lawyer faced scrutiny for using OpenAI’s ChatGPT to help prepare court documents, only to have the chatbot generate faulty citations.

Breitbart News previously reported on a major law firm warning its entire staff about the dangers of bogus information generated by AI:

In response to the incident, Ayala was immediately removed from the case and replaced by his supervisor, T. Michael Morgan, Esq. Morgan expressed “great embarrassment” over the fake citations and agreed to pay all fees and expenses related to Walmart’s reply to the erroneous court filing. He emphasized that this incident should serve as a “cautionary tale” for both his firm and the legal community as a whole.

Morgan added, “The risk that a Court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought.” He later admitted that AI can be “dangerous when used carelessly.”

The use of AI in the legal field has become increasingly common, with a July 2024 Reuters survey revealing that 63 percent of lawyers have used AI and 12 percent use it regularly. However, the incident at Morgan & Morgan highlights the importance of responsible AI use and the need for lawyers to independently verify the information generated by these tools.

Despite these high-profile errors, the potential for AI to streamline and enhance legal work continues to attract significant investment. Startup Harvey, which develops generative AI models specifically designed to assist lawyers, is reportedly seeking to raise over $250 million at a valuation of $5 billion.

Read more at TechCrunch.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

May 16th 2025