After refusing to accept "all lawful uses" terms, Anthropic risks being blacklisted by the US Department of Defense, prompting xAI to quickly fill the gap.

GMT Eight | 09:42, 27/02/2026
The conflict between Anthropic and the US Department of Defense comes to a head this Friday. Citing fundamental disagreement over the ethics of AI military applications, the Pentagon has issued an ultimatum: Anthropic must unconditionally accept the government's proposed "all lawful uses" clause by 5:01 PM local time on Friday, or it will be placed on a "supply chain risk" blacklist and may face mandatory constraints under the Cold War-era Defense Production Act.

The standoff stems from two safety red lines Anthropic set for military use of its Claude model: no use for mass surveillance of US citizens, and no integration into fully autonomous weapon systems. Anthropic is itself a Pentagon supplier: it signed a $200 million defense contract in July of last year, and Claude was the first AI tool approved for deployment on a government classified network. Even so, the company insists it cannot accept the Department of Defense's latest compromise proposal, which it says runs against its principles.

"We cannot in good conscience agree to their demands," Anthropic CEO Dario Amodei said in a statement on Thursday, adding that the Pentagon's threats would not change the company's stance.

The Department of Defense is not budging either. Chief spokesperson Sean Parnell said the military intends to use AI tools within legal limits but should not be subject to unilateral conditions imposed by any company. He dismissed Anthropic's concerns about mass surveillance and autonomous weapons, saying the Department has no intention of engaging in such activities and that mass surveillance is illegal. He characterized the clause as a "simple and sensible request" aimed at preventing Anthropic from jeopardizing critical military operations.
"We will not allow any company to dictate our operational decision-making," Parnell warned on the social platform X.

The standoff has quickly reshaped the competitive landscape of US defense AI. While Anthropic refused to compromise, Elon Musk's AI company xAI reached an agreement with the Pentagon granting its Grok model access to classified military systems for intelligence analysis, weapons development, and battlefield operations; xAI accepted the same "all lawful uses" standard that Anthropic rejected. Industry insiders note that fully replacing the deeply integrated Claude with Grok in classified systems is technically complex, so the Pentagon is also accelerating negotiations with other AI companies, including OpenAI and Alphabet (GOOGL.US). Grok, Google's Gemini, and OpenAI's ChatGPT have already been approved for use in non-classified military systems.

This contest over control of AI is no longer merely a commercial contract dispute. If the Pentagon invokes the Defense Production Act or sanctions Anthropic through the "supply chain risk" label, which would bar all defense contractors from using the company's products, it would set a dangerous precedent for US government intervention in AI ethical standards. Analysts warn that such a move could effectively strip US AI companies of the ability to set independent safety restrictions in defense applications, pushing the AI arms race into uncharted territory that lacks checks and balances.