
Anthropic’s $1.5-billion settlement signals new era for AI and artists

Chatbot builder Anthropic agreed to pay $1.5 billion to authors in a landmark copyright settlement that could redefine how artificial intelligence companies compensate creators.

The San Francisco-based startup is ready to pay authors and publishers to settle a lawsuit that accused the company of illegally using their work to train its chatbot.

Anthropic developed an AI assistant named Claude that can generate text, computer code and more. Writers, artists and other creative professionals have raised concerns that Anthropic and other tech companies are training their AI systems on creative work without permission and without fair compensation.

As part of the settlement, which the judge still needs to approve, Anthropic agreed to pay authors $3,000 per work for an estimated 500,000 books. It is the largest known settlement in a copyright case, signaling to other tech companies facing copyright infringement allegations that they, too, may eventually have to pay rights holders.

Meta and OpenAI, the maker of ChatGPT, have also been sued over alleged copyright infringement. Walt Disney Co. and Universal Pictures have sued AI company Midjourney, which the studios allege trained its image generation models on their copyrighted materials.

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer for the authors, in a statement. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Last year, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson sued Anthropic, alleging that the company committed “large-scale theft” and trained its chatbot on pirated copies of copyrighted books.

U.S. District Judge William Alsup of San Francisco ruled in June that Anthropic’s use of the books to train the AI models constituted “fair use,” so it wasn’t illegal. But the judge also ruled that the startup had improperly downloaded millions of books through online libraries.

Fair use is a legal doctrine in U.S. copyright law that allows for the limited use of copyrighted materials without permission in certain cases, such as teaching, criticism and news reporting. AI companies have pointed to that doctrine as a defense when sued over alleged copyright violations.

Anthropic, founded by former OpenAI employees and backed by Amazon, pirated at least 7 million books from Books3, Library Genesis and Pirate Library Mirror, online libraries containing unauthorized copies of copyrighted books, to train its software, according to the judge.

Anthropic also bought millions of print copies in bulk, stripped the books’ bindings, cut their pages and scanned them into digital and machine-readable form, a process Alsup’s ruling found to fall within the bounds of fair use.

In a subsequent order, Alsup pointed to potential damages for the copyright owners of books downloaded from the shadow libraries LibGen and PiLiMi by Anthropic.

Although the award is massive and unprecedented, it could have been much worse for Anthropic: if the company had been charged the maximum statutory penalty for each of the millions of works it used to train its AI, the bill could have exceeded $1 trillion, some calculations suggest.

Anthropic disagreed with the ruling and didn’t admit wrongdoing.

“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel for Anthropic, in a statement. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

The Anthropic dispute is one of many cases in which authors, artists and other content creators are demanding that the companies behind generative AI compensate them for the use of online content to train their AI systems.

Training involves feeding enormous quantities of data — including social media posts, photos, music, computer code, video and more — into AI systems so they can discern patterns of language, imagery, sound and conversation that they can mimic.

Some tech companies have prevailed in copyright lawsuits filed against them.

In June, a judge dismissed a lawsuit authors filed against Facebook parent company Meta, which also developed an AI assistant, alleging that the company stole their work to train its AI systems. U.S. District Judge Vince Chhabria noted that the lawsuit was tossed because the plaintiffs “made the wrong arguments,” but the ruling didn’t “stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

Trade groups representing publishers praised the Anthropic settlement on Friday, saying it sends a clear signal to tech companies that are developing powerful artificial intelligence tools.

“Beyond the monetary terms, the proposed settlement provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models,” said Maria Pallante, president and chief executive of the Association of American Publishers, in a statement.

The Associated Press contributed to this report.


Anthropic to start training AI models from users’ chat conversations

Aug. 29 (UPI) — Anthropic plans to start training its artificial intelligence models with user data, one day after announcing that a hacker had used Claude to identify 17 companies vulnerable to attack and to obtain sensitive information.

The company is asking all users of Claude to decide by Sept. 28 whether they want their conversations used for the process. Anthropic will retain data for up to five years, according to a blog post by the company on Thursday.

Anthropic, a public-benefit AI research and development company headquartered in San Francisco, was founded in 2021 by seven OpenAI leaders and researchers who left the rival company over disagreements about safety policies.

In 2023, Amazon invested $4 billion and Google $2 billion in the company.

Claude debuted in March 2023, and the latest version, Claude 4, was introduced in May. Claude has approximately 18.9 million monthly active users worldwide. Plans range from free to as much as $30 per month per user.

The affected consumer products include the Claude Free, Pro and Max plans. The change does not apply to Claude for Work, Claude Gov, Claude for Education or application programming interface use, including through third-party platforms such as Amazon Bedrock and Google Cloud’s Vertex AI.

Previously, users were told their prompts and conversations would be deleted automatically from the company’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies. In the latter case, a user’s inputs and outputs might be retained for up to two years.

“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” the company said. “You’ll also help future Claude models improve at skills like coding, analysis and reasoning, ultimately leading to better models for all users.”

The company noted users are “always in control of this setting and whether we use your data in this way.”

New users can select a preference during the sign-up process, while existing users will see the choice in a pop-up window. To keep people from accidentally clicking “accept,” the message “Updates to Consumer Terms and Policies” appears in larger letters.

Changes will go into effect immediately.

After Sept. 28, users will need to make their selection on the model training setting to continue using Claude.

The five years of data retention will only apply to new or resumed chats and coding sessions, “and will allow us to better support model development and safety improvements,” the company said.

The company also said users’ privacy will be protected.

“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” the company said. “We do not sell users’ data to third parties.”

Connie Loizos, a writer for TechCrunch, explained why the policy changed.

“Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand,” Loizos said. “Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.”

The Federal Trade Commission warned on Jan. 9, 2024, when Joe Biden was president, that AI companies “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print” could face enforcement action because “they risk running afoul of the law.”

The current FTC has only three members.

On Wednesday, Anthropic said an unnamed hacker “used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks.” In cyber extortion, hackers steal sensitive user information or trade secrets.

The hacker convinced Claude Code, Anthropic’s coding assistant that specializes in “vibe coding,” or creating computer programs from simple requests, to identify companies vulnerable to attack. Claude then created malicious software to steal sensitive information from those companies, organized the hacked files and analyzed them to help determine what was sensitive and could be used to extort the victims.

The targets included healthcare providers, emergency services, and government and religious institutions. The hacker threatened to publicly expose the data unless a ransom of up to $500,000 was paid, the company said.

The company also said it discovered that North Korean operatives had been using Claude to fraudulently secure and maintain remote employment positions at U.S. Fortune 500 technology companies to generate profit for the North Korean regime.

“Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions,” the company said.

The company said it updated preventive safety measures.


Anthropic launches AI advisory council to boost ties with Washington

The AI company’s new council comes a month after the Pentagon signed a deal with several AI companies to develop tools for defence.

The artificial intelligence company Anthropic has launched a National Security and Public Sector Advisory Council in an effort to deepen ties with Washington and allied governments as AI becomes increasingly central to defence.

The San Francisco-based start-up announced the new panel on Wednesday.

The council’s launch underscores AI firms’ growing efforts to shape policies and ensure their technology supports democratic interests amid global competition.

Anthropic’s new effort comes as rivals, such as OpenAI and Google DeepMind, step up engagement with governments and regulators on AI safety, though neither has announced a dedicated national security advisory council.

Anthropic’s council brings together former senators and senior officials from the US Department of Defense, the intelligence agencies, and the Departments of Energy and Justice.

It will advise Anthropic on integrating AI into sensitive government operations while shaping standards for security, ethics and compliance.

Its members include Roy Blunt, a former senator and intelligence committee member, David S Cohen, a former deputy CIA director, and Richard Fontaine, who leads the Center for a New American Security.

Other appointees held top legal and nuclear security roles across Republican and Democratic administrations.

Anthropic said the group will advise on high-impact applications in cybersecurity, intelligence analysis and scientific research, while helping set industry standards for responsible AI use.

The company plans to expand the council as partnerships with public-sector institutions grow.

Last month, the Pentagon established a $200m programme to develop AI tools for defence, highlighting the sector’s push to balance innovation with security risks. The initiative reflects intensifying global competition over AI capabilities, with Washington seeking to maintain an edge against rivals, such as China and Russia.

The effort includes Anthropic, OpenAI, Alphabet, Google’s parent company, and xAI, the AI company championed by Elon Musk.


AI company Anthropic settles with authors who alleged piracy

Artificial intelligence startup Anthropic has settled a class action copyright infringement lawsuit, in which authors accused the company of training its AI models on their work without permission, according to a Tuesday court filing.

San Francisco-based Anthropic, which trained its AI assistant Claude using copyrighted books, was sued by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson in August 2024.

Terms of the settlement were not disclosed. The company did not immediately respond to a request for comment.

In June, U.S. District Judge William Alsup ruled that AI models could legally learn from copyrighted books without the authors’ consent. The decision was a partial win for Anthropic.

Alsup found the usage to be “exceedingly transformative” and “a fair use,” though the company might have broken the law by pirating a large portion of its source material. According to the filing in the U.S. Court of Appeals for the 9th Circuit, the tech company and the involved authors asked the court to pause its proceedings while they finalize the settlement deal.

“Fair use” doctrine, which allows for the limited reproduction of copyrighted material without consent in some circumstances, is a key component of AI companies’ defenses against copyright claims.

Alsup originally ordered the matter to go to trial in December to decide how much the company would pay in piracy damages. Had the case gone to trial, damages could have reached up to $150,000 per case of willful copyright infringement and cost the startup billions. In early August, the AI company sought to appeal the ruling and was denied.

Anthropic, founded by former OpenAI employees and backed by Amazon, pirated at least 7 million books from Books3, Library Genesis and Pirate Library Mirror, online libraries containing unauthorized copies of copyrighted books, to train its software, according to the judge.

Anthropic also bought millions of print copies in bulk, stripped the books’ bindings, cut their pages and scanned them into digital and machine-readable form, which Alsup’s ruling found to fall within the bounds of fair use.

“The mere conversion of a print book to a digital file to save space and enable searchability was transformative for that reason alone,” he wrote.

Anthropic later purchased the books it initially pirated. Alsup said the purchases did not absolve the company, but that they could reduce damages.

The agreement comes as many other copyright cases against AI companies move through courts around the country. Most recently, Walt Disney Co. and Universal Pictures sued artificial intelligence company Midjourney, which the studios allege trained its image generation models on their copyrighted materials.
