AI company wins fair use challenge over authors


Summary

Fair use interpretation

A court ruled in favor of Anthropic's argument that using copyrighted books to train its AI models was transformative and constituted fair use.

Piracy allegations

While Anthropic succeeded on the fair use argument for AI training, the court found that downloading and maintaining a digital library of over seven million pirated books was not legally permissible.

Precedent for AI lawsuits

This is not the first legal battle involving artificial intelligence and copyright; previous and ongoing lawsuits include Thomson Reuters' suit against Ross Intelligence and Disney and Universal's suit against Midjourney.


Full story

Artificial intelligence company Anthropic scored a major victory in court that could affect dozens of similar lawsuits. Despite the legal win, the company still faces claims that it pirated books to build a digital library.

Fair use argument

A group of authors filed a class-action lawsuit against Anthropic in 2024, claiming the company had violated their copyrights by using their books without permission.


The company used those works to train its language model named Claude, a competitor to popular AI tools like OpenAI’s ChatGPT and Google’s Gemini.

Anthropic argued for fair use, which allows limited use of copyrighted material without permission for news reporting, education and other purposes.

One of the major ways a court determines fair use is to see if the use of the copyrighted works was “transformative.” That means it’s not a substitute for the original work, but something new.

“The technology at issue was among the most transformative many of us will see in our lifetimes,” Judge William Alsup wrote in his summary judgment. “The use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use under Section 107 of the Copyright Act.”

Split decision

Despite ruling in Anthropic's favor on fair use, Alsup also held that the piracy of the books was not legally excusable.

In practical terms, downloading pirated books and building a library out of them is not allowed, but using the books to train the AI tool is.

“Before buying books for its central library, Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI,” Judge Alsup wrote. “The downloaded pirated copies used to build a central library were not justified by a fair use.”

There will now be a trial on the piracy allegations, which could prove very costly for Anthropic. The minimum statutory damages for this kind of copyright infringement are $750 per work.

With the judge's summary citing more than seven million pirated books, potential damages could exceed $5 billion.

A recent report from Reuters shows Anthropic makes about $3 billion in annualized revenue.

AI lawsuits

This is far from the first lawsuit involving AI and fair use, but it is among the first in which the fair use argument has succeeded.

In a different lawsuit, Thomson Reuters sued AI startup Ross Intelligence, alleging Ross used Westlaw’s headnotes to train Ross’ AI legal-research engine. Westlaw is a prominent legal research platform owned by Thomson Reuters.

Ross attempted to argue fair use, but the court found that several of the fair use factors weighed against Ross, holding that Ross’ use of Westlaw’s headnotes harmed the market for Westlaw and its derivative products.

Now, with one ruling going against fair use and this most recent ruling in favor of fair use, the challenges could become appropriate for the U.S. Supreme Court to consider.

Disney and Universal recently filed suit against Midjourney. The Hollywood powerhouses claim that Midjourney trained its AI image generators using copyrighted works, including images from Marvel, The Simpsons and more.

Midjourney has not yet commented on the lawsuit. In another lawsuit, Getty Images filed against Stability AI for allegedly using Getty’s images to train Stability’s AI tool, which generates images from text inputs.

Cole Lauterbach (Managing Editor), Zachary Hill (Video Editor), and Devin Pavlou (Digital Producer) contributed to this report.

Why this story matters

A federal court ruling that partially upholds the use of copyrighted books for AI training under fair use, while allowing piracy claims to proceed, could set important precedents for the legal boundaries of artificial intelligence development and copyright law.

AI and copyright law

The ruling addresses how existing copyright law applies to emerging artificial intelligence technologies, influencing future AI research and content creation.

Fair use determination

Judge William Alsup’s decision that Anthropic’s use of books for AI training is “exceedingly transformative” and falls under fair use clarifies a contentious legal issue and may guide other courts addressing similar disputes.

Ongoing legal challenges

The unresolved piracy claims and reference to additional lawsuits against AI firms highlight the ongoing complexity and high financial stakes of litigation in the rapidly evolving AI sector.

Get the big picture

Synthesized coverage insights across 40 media outlets

Community reaction

Writers and creative professionals have expressed concern about how the ruling could affect their livelihoods, arguing that AI companies using their works without permission jeopardizes future income. Some technology and AI industry advocates view the judge's decision as a milestone toward clarifying legal boundaries for AI development and see it as promoting innovation, according to reported industry statements.

Context corner

The fair use doctrine cited in this case was established in U.S. copyright law decades before AI and large-scale data training existed. Previous court debates about fair use have centered on transformative use, meaning works used for new purposes rather than simple replication. The rapid growth in generative AI has prompted new legal challenges that reinterpret longstanding legal frameworks.

Global impact

This court ruling may serve as a precedent in the U.S., influencing ongoing and future legal cases against AI companies using copyrighted materials for large language model training, including global firms like OpenAI and Meta. However, as some sources note, different jurisdictions—such as the UK, which uses ‘fair dealing’ instead of ‘fair use’—may interpret these issues differently worldwide.

Bias comparison

  • Media outlets on the left frame the ruling as a pivotal “fair use” victory for AI innovation, emphasizing the judge’s characterization of training on purchased books as “transformative” and spotlighting Anthropic’s shift away from piracy, thereby downplaying concerns about creator harm.
  • Media outlets in the center adopt a balanced, factual tone, labeling the decision “balanced” and highlighting legal complexity.
  • Media outlets on the right underscore the ruling as a damaging “digital free-for-all,” employing charged language like “piracy” and “stolen” to amplify fears of rampant copyright erosion and exploitation of authors.


Key points from the Left

  • A federal judge ruled that Anthropic's training of AI models on legally purchased books is fair use, according to Judge William Alsup's decision.
  • A separate trial will address allegations that Anthropic pirated millions of books from the internet.
  • The ruling marks the first favorable outcome for the AI industry in a copyright case, potentially influencing future cases.
  • Judge Alsup noted that while training AI was fair use, keeping pirated copies is not, and a trial for damages will proceed.


Key points from the Center

  • U.S. District Judge William Alsup in San Francisco ruled late Monday that Anthropic legally made fair use of copyrighted books to train its AI model Claude.
  • The ruling followed a lawsuit filed last year by authors alleging Anthropic used pirated books without permission to build its AI product, marking a key test for fair use in AI training.
  • Alsup found Anthropic's AI training transformative and lawful but ordered a December trial on whether storing pirated copies in a central library violated copyrights and caused damages.
  • Alsup wrote that the AI system's distillation of thousands of works was "quintessentially transformative," while noting that Anthropic's later purchase of books it had pirated may affect damages but not liability.
  • The decision supports AI companies’ use of copyrighted content under fair use but leaves unresolved issues about acquisition practices, indicating ongoing legal challenges for the industry.


Key points from the Right

  • Federal Judge William Alsup ruled that Anthropic's use of copyrighted books to train its AI models qualifies as "fair use."
  • This is the first time a court has explicitly supported AI training on copyrighted works without creator consent.
  • While Alsup ruled the training itself lawful, he allowed a trial examining Anthropic's use of pirated materials to proceed.
  • This decision potentially impacts numerous lawsuits against companies like OpenAI, Meta and Google.

