A federal appeals court in Washington, DC, on Wednesday rejected Anthropic’s bid to temporarily lift a supply‑chain‑risk designation imposed by the Pentagon, putting the ruling in direct conflict with a decision by a separate federal judge in San Francisco, who ordered the label removed last month. The split leaves the AI company — which says it is the first U.S. firm ever designated under two different supply‑chain laws — in limbo as both sides race toward final rulings that could take months.
A three‑judge panel on the DC Circuit said Anthropic “has not satisfied the stringent requirements” for a stay and called the dispute “unprecedented.” The panel stressed the risk of interfering with military operations, writing that granting a stay “would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.” The court’s order allows the government to keep Anthropic listed as a supply‑chain risk while litigation proceeds.
The conflicting outcomes stem from the Pentagon’s use of two similar supply‑chain statutes and from parallel litigation in different jurisdictions. The San Francisco judge, deciding one of the challenges last month, concluded the Department of Defense likely acted in bad faith — motivated by frustration with Anthropic’s proposed usage limits and public criticism of those limits — and ordered the designation removed. The administration complied with that order, temporarily restoring access to Anthropic’s Claude model across the Department of Defense and other federal agencies. The DC Circuit ruling, by contrast, addresses the second statutory pathway and keeps the designation in place for now, leaving no immediate mechanism to reconcile the conflicting preliminary judgments.
Anthropic said it welcomed the appellate court’s speed in moving the case but reiterated confidence in its claims. Danielle Cohen, an Anthropic spokesperson, said the company was “grateful the Washington, DC, court recognized these issues need to be resolved quickly” and remained “confident the courts will ultimately agree that these supply chain designations were unlawful.” Government lawyers, meanwhile, argued the label is necessary to protect military readiness.
Acting Attorney General Todd Blanche framed the DC Circuit stay as a vindication of Pentagon authority. On X, Blanche called the ruling “a resounding victory for military readiness,” adding, “Our position has been clear from the start—our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander‑in‑Chief and Department of War, not a tech company.” The Trump administration has increasingly referred to the Pentagon as the “Department of War,” a renaming adopted under the current White House.
The cases test how far the executive branch can go in controlling technology companies that supply AI tools to the government. The Pentagon is actively deploying AI in its operations against Iran, a fact the court highlighted in weighing the risk of disruption. Anthropic has argued in filings that it has been punished for drawing attention to safety limits in its Claude model and for publicly saying Claude lacks the reliability required for fully autonomous lethal applications such as drone strikes. Some legal experts and AI researchers told Wired they think Anthropic has a strong legal claim but noted that judges are often reluctant to override national‑security judgments by the White House.
The practical consequences for Anthropic are already material: the company says the designation has cost it government business, and the government says the label bars the Pentagon and its contractors from using Claude on sensitive projects. The DC Circuit has set oral arguments for May 19, and final decisions in both lawsuits could be months away. Neither side has revealed much about exactly how Claude was integrated into military systems or how far the Pentagon has progressed in migrating personnel to alternative models from Google DeepMind or OpenAI. The military says it has taken steps to ensure a smooth transition and to prevent any intentional sabotage of its systems during the changeover.
