US military leaders, including Defense Secretary Pete Hegseth, held critical discussions with executives from the artificial intelligence firm Anthropic on Tuesday regarding the use of the company’s advanced AI model. Hegseth has set a deadline for CEO Dario Amodei to reach an agreement with the Department of Defense (DoD) by the end of Friday, with the threat of penalties if terms are not satisfied, as reported by Axios.

Anthropic, known for its commitment to safety in AI, has faced ongoing disputes with the Pentagon about the permissible uses of its large language model, Claude. US defense officials seek unrestricted access to Claude’s capabilities, whereas Anthropic has pushed back against applications involving mass surveillance or autonomous weapons systems capable of operating without human oversight. The DoD has already begun incorporating Claude into its operations but has warned that it may cut ties with Anthropic over perceived obstacles to the collaboration.

The outcome of these negotiations could set a significant precedent regarding the AI industry’s response to government demands for military applications, an issue that has generated considerable debate among researchers and ethical AI proponents. The DoD has indicated that it may enforce punitive measures against Anthropic if compliance isn’t achieved, which could include terminating a substantial contract and identifying Anthropic as a “supply chain risk.”

In July of last year, the DoD established contracts with several AI firms, including Anthropic, Google, and OpenAI, amounting to as much as $200 million. Until recently, Anthropic’s Claude was the only model authorized for use within military classified systems. A recent agreement granted military personnel access on classified systems to Elon Musk’s xAI chatbot, even though the chatbot has drawn controversy for producing inappropriate images.

Both xAI and OpenAI have accepted the government’s usage terms, with reports indicating that OpenAI has permitted its model for “all lawful purposes.” However, the company has yet to respond to inquiries regarding the specifics of its agreement with the government.

The recent meeting comes on the heels of reports that the US military utilized Claude to aid in capturing Venezuelan leader Nicolás Maduro. The integration of AI into military operations has gained momentum, particularly under the Trump administration, where Trump has professed a desire for the US to lead in the potential global AI arms race.

Pentagon chief technology officer Emil Michael has publicly urged Anthropic to comply with the government’s requirements, asserting that a company wishing to profit from working with the US Department of Defense must accept conditions aligned with lawful use cases.

Amodei has expressed his support for increased regulation in the AI sector, and Anthropic has actively engaged in political action for stronger AI safeguards. After opposing Trump during the recent presidential campaign, the company faced backlash from pro-Trump investors, some of whom withdrew their financial support.

As the Pentagon invests billions in AI technologies, ranging from autonomous drones to precision targeting systems, the ethical implications of granting significant decision-making autonomy to AI—especially regarding lethality—are becoming increasingly urgent. Current conflicts, such as the ongoing war in Ukraine, highlight the immediacy of these debates, showcasing the deployment of semi-autonomous drones that require minimal human intervention.
