President Donald Trump on Friday called for US federal agencies to stop using Anthropic’s Claude AI after the company refused to grant the Department of Defense permission to use it for mass domestic surveillance or for fully autonomous weapons systems.
The president posted on the Truth Social platform, which he owns, that he is ordering the federal government to “IMMEDIATELY CEASE” use of Anthropic’s tools, saying there would be a six-month phaseout for agencies like the Department of Defense. He also denounced Anthropic as a “RADICAL LEFT, WOKE COMPANY.” The post marked the latest step in a showdown between Anthropic and the federal government that escalated significantly this week.
Claude is widely used across the Pentagon, including in classified systems, but the Trump administration has sought to use the technology for “any lawful purpose.” Anthropic has insisted in its existing contract that the technology not be used for mass surveillance of Americans or in autonomous offensive weapons systems without human input.
Earlier this week, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that he would invoke seldom-used powers to either force Anthropic to let the Pentagon use Claude for any lawful purpose or label the company a supply chain risk — jeopardizing its use by the government or defense contractors. Hegseth gave Anthropic a Friday deadline to comply.
Amodei said in a statement that the company, which was founded with a stated focus on AI safety, “cannot in good conscience accede to [the Pentagon’s] request” that it remove contract provisions stating Claude cannot be used in fully autonomous weapons systems or for domestic surveillance.
Worries about AI and mass surveillance
Amodei raised concerns that the law has not caught up with the potential for mass surveillance of Americans. The government can already buy information like Americans’ browsing history and records of individual movements without a warrant, but artificial intelligence raises the stakes. “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale,” he wrote.
Michael Pastor, dean for technology law programs at New York Law School, said in an email that it’s typical in contract law for those involved to seek clarity on terms. “Anthropic is right to press hard on what ‘for lawful purposes’ means,” he said. “If the Pentagon is unwilling to clarify whether it would use Anthropic’s technology for mass domestic surveillance, that raises flags Anthropic seems justified in waving.”
Anthropic’s Claude is reportedly the most widely used AI system by the US military. Alternatives could include tools from OpenAI, Google or Elon Musk’s xAI.
In an internal memo reported Friday by The Wall Street Journal, OpenAI CEO Sam Altman told employees that the company has the same red lines as Anthropic — no mass domestic surveillance or autonomous offensive weapons. Altman said he believed those guardrails could be managed through technical requirements, such as requiring models to be deployed in the cloud. (Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Employees of Google and OpenAI circulated a petition calling for their companies to stand with Anthropic in refusing to allow the use of AI models for domestic mass surveillance or fully autonomous lethal weapons systems. The petition said the Pentagon is “trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”
Just as in consumer technology, artificial intelligence systems have seen widespread adoption in government and military settings. The capabilities of these tools have grown significantly in just the past few years, and that pace of change has not slowed. Regulation and oversight of AI haven’t kept up, and AI has magnified the potential harms of corporate or government surveillance by making it easier and cheaper.
Pastor said this dispute could have significant ramifications for what leverage governments and tech companies have against each other when their views on the appropriate use of technology clash. “Anthropic may feel that yielding here opens a Pandora’s box of uses for which Claude could be deployed,” he said.
