Anthropic Tells Appeals Court It Cannot Control Claude Once It Is Deployed in Pentagon Networks
Anthropic on April 22, 2026, told a federal appeals court that it cannot manipulate its Claude system once it is running inside classified Pentagon networks, sharpening a legal fight that began after the Trump administration moved to restrict the company’s government work. The filing gives the court a more technical picture of the dispute: once the model is embedded in a military environment, Anthropic argues, the company no longer has the kind of live control the government implies it still has.
Anthropic’s April 22 filing narrows the dispute
The company’s 96-page submission to the U.S. Court of Appeals in Washington, D.C., is part of a lawsuit filed in March 2026 over the Pentagon’s treatment of Anthropic’s AI systems. Anthropic says the government has wrongly portrayed Claude as a tool the company can still steer after deployment, while the company says the model’s operation inside secure defense networks limits what the vendor can do from the outside.
That distinction matters because the case is not only about commercial access to federal contracts. It also tests how regulators and agencies describe control, responsibility and risk when an AI model is integrated into classified systems rather than used as a standalone consumer product.
Military use and AI oversight are now colliding
The dispute stems from the Pentagon’s concerns about how AI could be used in fully autonomous weapons and in surveillance applications. In earlier court filings, Anthropic said the administration’s designation of the company as a supply chain risk was retaliatory and legally improper. The new filing shifts the emphasis toward deployment mechanics, suggesting that the government’s argument depends on a level of vendor control that Anthropic says does not exist once the model is inside defense infrastructure.
Anthropic is also fighting separate court battles over the government’s actions, including a San Francisco case in which a judge blocked part of the Pentagon’s penalty and an appeals-court proceeding in Washington that recently declined to shield the company from the fallout while evidence is still being gathered.
Why the filing matters for AI vendors
The case is one of the clearest recent tests of how advanced AI systems are regulated when they move from cloud software into national security settings. If courts accept Anthropic’s framing, future disputes may hinge less on abstract fears about AI autonomy and more on the concrete architecture of who can modify, monitor or disable a model once it is inside a government environment.
For AI companies pursuing defense contracts, that is an operational question as much as a legal one. It goes to logging, access controls, model updates and the division of responsibility between the vendor and the agency running the system.
The appeals court has not yet resolved the broader dispute, but Anthropic’s filing makes clear that the fight now centers on what control looks like after deployment, not just on what the software can do in the abstract.
Source: Associated Press
Date: 2026-04-22T23:45:26Z