
OpenAI reaches deal with Pentagon after Trump drops Anthropic

OpenAI co-founder and CEO Sam Altman testifies before the Senate Commerce, Science, and Transportation Committee on Capitol Hill on May 8 in Washington, D.C. He announced Friday that his company would provide artificial intelligence models to the Pentagon. File Photo by Anna Rose Layden/UPI

Feb. 28 (UPI) — OpenAI announced it secured a deal to provide artificial intelligence services to the Defense Department hours after the Trump administration directed all federal agencies to stop using AI services provided by Anthropic.

OpenAI is the San Francisco-based AI research company, founded by Sam Altman, Elon Musk and others, behind applications including ChatGPT and DALL-E.

“Tonight, we reached an agreement with the Department of War to deploy our models in their classified work,” OpenAI CEO Altman said late Friday in a post on X.

The Pentagon had previously used Anthropic’s AI model Claude in much of its classified work, including its operation to capture Venezuelan President Nicolas Maduro.

Contract negotiations between the tech company and the Defense Department soured after the Trump administration demanded that the government be allowed to use the AI system for “all lawful purposes.” Anthropic, though, wanted guardrails in place to prevent the government from using its AI system to surveil Americans or to create autonomous weapons.

Friday evening, President Donald Trump directed all federal agencies to stop using Anthropic, accusing it of being a “radical left, woke company” attempting “to dictate how our great military fights and wins wars!”

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump wrote in a post on Truth Social.

In his post on X, Altman said OpenAI’s agreement with the Defense Department includes protections against domestic surveillance and autonomous weapons similar to those Anthropic had sought.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he said. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The New York Times reported that unlike Anthropic, OpenAI included in its contract with the Pentagon phrasing that allows the government to use its AI product for all lawful purposes.

Fortune reported that Altman told OpenAI employees that the government is allowing the company to build its own “safety stack” and that if the AI model refuses a certain task, the government won’t force it to comply.


OpenAI hauls in $110B in third round of funding

OpenAI CEO Sam Altman, pictured at the White House in January, said on Friday that new deals with Amazon, Nvidia and SoftBank will allow the company to continue to grow with enough computing power to develop its products and applications. Photo by Aaron Schwartz/UPI

Feb. 27 (UPI) — Amazon, Nvidia and SoftBank collectively invested $110 billion in OpenAI, double the amount the company raised in a 2025 funding round, pushing it to a $730 billion pre-money valuation.

The funding round is one of the largest private funding rounds in history — $50 billion from Amazon, $30 billion from Nvidia and $30 billion from SoftBank — and OpenAI said the round has not closed, with more investors expected, CNBC and TechCrunch reported.

The new investment from Amazon builds on the companies’ existing $38 billion multi-year agreement, which OpenAI said in a press release is planned to expand by $100 billion over the next eight years.

“We’re super excited about this deal,” OpenAI CEO Sam Altman said in an interview with CNBC. “AI is going to happen everywhere. It’s transforming the whole economy, and the world needs a lot of collective computing power to meet the demand.”

At the core of the broadening collaboration between Amazon and OpenAI is the development of a Stateful Runtime Environment that runs on Amazon Web Services.

AWS also will be the exclusive third-party vendor for OpenAI Frontier, an enterprise platform for AI agent deployment.

The power required to run AI systems continues to grow exponentially, and the Amazon deal also will allow OpenAI to start building custom AI applications — most notably, one for Amazon’s customer-facing applications.

In a joint statement with Microsoft, which OpenAI has been working with since 2019, the two companies said that the new investment deals will not affect their relationship and that the partnership “remains strong and central.”

Microsoft Azure remains the exclusive cloud provider of stateless OpenAI APIs and OpenAI Frontier will continue to be hosted on Azure, the companies said.



Trump orders federal agencies to stop using Anthropic’s AI after clash with Pentagon

President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.

In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.

The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.

Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday if it didn’t loosen restrictions on its AI model so it could be used for more military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or autonomous weapons.

The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.

Defense Secretary Pete Hegseth, who met with Anthropic’s Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.

“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.

Anthropic didn’t immediately respond to a request for comment.

Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”

The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.

On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.

The government has emphasized in negotiations that it wants to use Anthropic’s technology only for legal purposes and that the safeguards Anthropic wants are already covered by law.

Still, Amodei was worried about Washington’s commitment.

“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

Tech workers have backed Anthropic’s stance.

Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they are urging their employers to reject similar demands in any contracts of their own with the Pentagon.

“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.

Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.

Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.

“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”

Anthropic has distinguished itself from its rivals by touting its concern about AI safety.

The company, valued at roughly $380 billion, is legally required to balance making money with advancing the company’s public benefit of “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.

The company has roughly 2,000 employees and annualized revenue of about $14 billion.
