
AI firm Anthropic plans to pay authors $1.5 billion

Settlement talks are underway in one of several copyright lawsuits against AI companies, with Anthropic seeking to limit a potentially massive payout.

Anthropic, an AI company, has announced plans to put $1.5 billion into the hands of authors whose works were used without permission.

In a significant development, AI firm Anthropic has proposed a settlement of at least $1.5 billion with the authors suing it for illegally downloading their works. The proposal responds to allegations that around 500,000 books and other texts were used to train Anthropic's AI chatbot, Claude.

Anthropic, based in San Francisco, is facing multiple lawsuits from copyright holders over the use of their works in AI training. The settlement, if approved by a judge, would amount to approximately $3,000 (around 2,500 euros) per affected work. However, it does not cover video content.

The case took shape when a judge in San Francisco initially ruled that Anthropic's use of copyrighted texts for training could potentially fall under the "fair use" principle. This principle allows limited use of copyrighted material without permission for purposes such as criticism, commentary, news reporting, teaching, scholarship, or research.

However, the judge concluded that downloading books from two pirate libraries was not covered by the "fair use" principle. That determination could lead to penalties of up to $150,000 per book at trial, putting pressure on Anthropic to reach a settlement and avoid potentially far higher penalties.

Claude, one of the most successful competitors to OpenAI's popular chatbot ChatGPT, was trained using texts downloaded from these two pirate online libraries. By settling, Anthropic aims to head off a trial in which the startup could be ordered to pay significantly higher amounts.

It is important to note that the "fair use" principle does not extend to the downloading of pirated material. Anthropic's settlement proposal is a response to the potential penalties at trial, not an admission of guilt.

The plaintiffs have accepted the proposal, but it must be approved by a judge in San Francisco to take effect. Apple is also being sued by rights holders for the allegedly unlawful use of their works in training its AI systems.

AI programs are trained on vast amounts of information so they can generate meaningful responses to user queries. The training data typically includes a diverse range of texts, such as books, articles, and other written works. Using copyrighted material without permission can lead to legal trouble, as Anthropic's case shows.

This settlement marks a significant step in the ongoing debate about the use of copyrighted material in AI training. It underscores the importance of obtaining permission and adhering to copyright laws to avoid potential legal consequences.
