The US defence secretary designated the AI company a ‘supply chain risk’ after it refused to remove guardrails on its technology.
Published On 18 Mar 2026
The administration of United States President Donald Trump said in a court filing on Tuesday that the Pentagon’s blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence company’s high-stakes lawsuit challenging the decision.
Defense Secretary Pete Hegseth designated Anthropic, the maker of popular AI assistant Claude, a national security supply chain risk on March 3 after the company refused to remove guardrails against its technology being used for autonomous weapons and domestic surveillance.
The Trump administration’s filing says Anthropic is unlikely to succeed in its claims that the US government’s action violated speech protections under the US Constitution’s First Amendment, asserting that the dispute stems from contract negotiations and national security concerns, not retaliation.
“It was only when Anthropic refused to release the restrictions on the use of its products — which refusal is conduct, not protected speech — that the President directed all federal agencies to terminate their business relationships with Anthropic,” the administration’s legal filing said. The filing, from the US Justice Department, said that “no one has purported to restrict Anthropic’s expressive activity”.
Anthropic’s lawsuit in California federal court asks a judge to block the Pentagon’s decision while the case plays out. Some legal experts say the company appears to have a strong case that the government overreached.
In a statement, Anthropic said it was reviewing the government’s filing. The company said that “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”
The White House did not immediately respond to a request for comment.
Supply chain risk
Trump has backed Hegseth’s move, which excludes Anthropic from a limited set of military contracts. But it could damage the company’s reputation and cause billions of dollars in losses this year, according to its executives.
The designation came after months of negotiations between the Pentagon and Anthropic reached an impasse, prompting Trump and Hegseth to denounce the company and accuse it of endangering American lives with its use restrictions.
Anthropic has disputed those claims and said AI is not yet safe enough to be used in autonomous weapons. The company said it opposes domestic surveillance as a matter of principle.
In its March 9 lawsuit, Anthropic said that the “unprecedented and unlawful” designation violated its free speech and due process rights, while running afoul of a law requiring federal agencies to follow specific procedures when making decisions.
The Pentagon separately designated Anthropic a supply chain risk under a different law that could expand the order to the entire government.
Anthropic is challenging that move in a second lawsuit in a Washington, DC, appeals court.
Feb. 27 (UPI) — President Donald Trump announced he was directing all federal agencies to stop using Anthropic’s artificial intelligence solutions, even though Anthropic’s model is the only one being used in the military’s classified systems.
Trump lashed out on his social media network Truth Social on Friday.
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military,” he posted.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” he wrote.
But an unnamed source told Axios that despite the president’s post, Anthropic and the Pentagon were still negotiating ahead of the 5:01 p.m. deadline that Defense Secretary Pete Hegseth set.
Undersecretary of Defense for Research and Engineering Emil Michael called Anthropic CEO Dario Amodei a liar on X. “It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the U.S. Military and is OK putting our nation’s safety at risk,” he posted. “The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
Trump’s post continued: “Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” the president said.
“WE will decide the fate of our Country – NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
Anthropic’s AI model Claude was used in the operation to capture Venezuelan President Nicolas Maduro and could be used in Iran, if needed. Axios reported that defense officials praised Claude’s abilities, and one admitted that it would be a “huge pain in the ass” to stop using it.
Anthropic faced a 5 p.m. EST deadline to comply with the Pentagon and allow it to use the company’s artificial intelligence system without restraint.
If Anthropic declines, Hegseth has said he will have the company labeled a “supply chain risk” or invoke the Defense Production Act to force it to agree.
In July, Anthropic signed a $200 million contract with the Pentagon, but Amodei insists that its AI model Claude not be used for mass surveillance in the United States or for autonomous weapons without human approval.
The Defense Department has said it doesn’t plan to use the tools in that manner, but that Anthropic doesn’t get to make those decisions. It says the U.S. government can use tools “for all lawful purposes.”
“Legality is the Pentagon’s responsibility as the end user,” a senior Pentagon official told NPR.
On Thursday, Amodei said the company wouldn’t agree to the Pentagon’s terms.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries,” he wrote in a statement. “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases [domestic surveillance and autonomous weapons] have never been included in our contracts with the Department of War, and we believe they should not be included now.”
Amodei acknowledged Hegseth’s threats to blacklist the company or force it to comply.
“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei wrote. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
Pentagon spokesperson Sean Parnell said on X Thursday: “Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions. They have until 5:01 p.m. ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW [Department of War].”
Geoffrey Gertz, a senior fellow at the Center for a New American Security, told NPR that both measures the Pentagon threatened are unusual and contradictory.
“It’s this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be compelled to be part of the system no matter what,” he said.
Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, told CNBC: “There are no winners in this. It leaves a sour taste in everyone’s mouth.”
OpenAI creator Sam Altman testifies before the Senate Commerce, Science, and Transportation Committee on Capitol Hill on May 8 in Washington, D.C. He announced Friday that his company would provide artificial intelligence models to the Pentagon. File Photo by Anna Rose Layden/UPI
Feb. 28 (UPI) — OpenAI announced it secured a deal to provide artificial intelligence services to the Defense Department hours after the Trump administration directed all federal agencies to stop using those provided by Anthropic.
OpenAI is the San Francisco-based tech research company founded by Sam Altman, Elon Musk and others behind applications including ChatGPT and DALL-E.
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified work,” OpenAI CEO Altman said late Friday in a post on X.
The Pentagon had previously used Anthropic’s AI model Claude in much of its classified work, including its operation to capture Venezuelan President Nicolas Maduro.
Contract negotiations between the tech company and the Defense Department soured after the Trump administration demanded it be allowed to use the AI system for “all lawful purposes.” Anthropic, though, wanted certain guardrails in place to prevent the government from using its AI system for surveilling Americans or to create autonomous weapons.
Friday evening, President Donald Trump directed all federal agencies to stop using Anthropic, accusing it of being a “radical left, woke company” attempting “to dictate how our great military fights and wins wars!”
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump wrote in a post on Truth Social.
In his post on X, Altman said OpenAI’s agreement with the Defense Department includes similar protections against domestic surveillance and weapons sought by Anthropic.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he said. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
The New York Times reported that unlike Anthropic, OpenAI included in its contract with the Pentagon phrasing that allows the government to use its AI product for all lawful purposes.
Fortune reported that Altman told OpenAI employees that the government is allowing the company to build its own “safety stack” and that if the AI model refuses to allow the government to do a certain task, the government won’t force it to.
President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.
In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.
The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.
Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday if it didn’t loosen restrictions on its AI model so it could be used for more military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or autonomous weapons.
The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.
Defense Secretary Pete Hegseth, who met with Anthropic’s Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.
“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.
Anthropic didn’t immediately respond to a request for comment.
Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”
The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.
On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.
The government has emphasized in negotiations that it wants to use Anthropic’s technology only for legal purposes, and that the safeguards Anthropic wants are already covered by the law.
Still, Amodei was worried about Washington’s commitment.
“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
Tech workers have backed Anthropic’s stance.
Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they are urging their employers to reject similar demands in any additional contracts they hold with the Pentagon.
“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.
Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.
Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.
“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”
Anthropic has distinguished itself from its rivals by touting its concern about AI safety.
The company, valued at roughly $380 billion, is legally required to balance making money with advancing the company’s public benefit of “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”
Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.
The company has roughly 2,000 employees and annualized revenue of about $14 billion.
Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.
Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.
“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”
Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.
Those are two red lines that seem rather reasonable, even to Claude.
However, the Pentagon — specifically Pete Hegseth, our secretary of Defense, who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off that position, and allow the military to use Claude for any “lawful” purpose it sees fit.
Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday. (Tom Williams/CQ-Roll Call, Inc via Getty Images)
The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.
Other AI companies, such as white rights advocate Elon Musk’s xAI, maker of Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.
Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He began Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and wanted a company that would prioritize the careful part.
Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).
Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”
He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.
“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”
For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Such recordings could be fair game legally, because the law has not kept pace with technology.
Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”
Kind of a weird statement, since Amodei is basically on the side of protecting civil rights, which means the Department of Defense is arguing it’s bad for private entities to do that? And also, isn’t the Department of Homeland Security already creating a secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?
Help, Claude! Make it make sense.
If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.
Claude pointed out something chilling. It’s not that it would go rogue, it’s that it would be too efficient and fast.
“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?
“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”
OK then.
“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”
You know who can provide that legitimacy? Our elected leaders.
It is ludicrous that Amodei and Anthropic are in this position. It represents a complete abdication on the part of our legislative bodies, which have failed to create rules and regulations that are clearly and urgently needed.
Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”
Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground — without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.
Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.
Because when the machine tells us it’s dangerous to trust it, we should believe it.