The U.S. military’s use of Anthropic’s AI model, Claude, has come under scrutiny as reports indicate it was deployed during the ongoing military operation against Iran. The revelation follows Donald Trump’s abrupt decision to sever ties with Anthropic just hours before the attacks commenced, highlighting the tensions surrounding AI technology in military operations.
The Wall Street Journal and Axios reported that the U.S. military leveraged Claude for essential intelligence tasks, including selecting targets and conducting battlefield simulations, as part of the extensive joint bombardment with Israel that began on Saturday. This reliance on AI raises questions about whether the military can quickly disengage from advanced technologies that have become deeply integrated into its operations.
In a controversial statement on Truth Social, Trump criticized Anthropic, labeling it a “Radical Left AI company” run by individuals disconnected from reality. His directive that federal agencies immediately stop using Claude reflects a conflict that arose after the AI’s involvement in earlier military actions, particularly the January raid aimed at capturing Venezuelan President Nicolás Maduro. Anthropic has since voiced strong objections, emphasizing that its ethical guidelines prohibit the use of Claude for violent purposes or surveillance.
The fallout from the incident has strained relations among Trump, the Pentagon, and Anthropic. Defense Secretary Pete Hegseth publicly condemned the company, accusing it of “arrogance and betrayal,” and insisted that the military should have unrestricted access to Anthropic’s AI models for legitimate tasks. Nevertheless, Hegseth acknowledged the difficulty of swiftly removing AI tools from military use, noting that Anthropic would continue to offer its services for up to six months to ensure a smooth transition.
In the wake of the rift with Anthropic, OpenAI has emerged as a potential alternative: CEO Sam Altman confirmed a partnership with the Pentagon to use the company’s tools, including the widely adopted ChatGPT, within its classified network. The development could mark a significant shift in how military operations integrate artificial intelligence, underscoring the demand for reliable and patriotic technologies in defense strategies.
The evolving dynamics between the U.S. military and AI companies illustrate the complex intersection of technology, ethics, and national security, as both sides navigate the potential and pitfalls of artificial intelligence in combat scenarios.
