February 27, 2026
Artificial intelligence firm Anthropic reportedly declined a request from the Pentagon to test its AI models, including Claude, for potential military applications. The company cited its “responsible scaling policy,” stating it could not fulfill the demands “in good conscience.”
Anthropic's decision underscores ongoing ethical debates over deploying advanced AI in defense contexts. The Pentagon has shown growing interest in integrating commercial AI into its operations, with initiatives such as the Joint Artificial Intelligence Center (JAIC), since folded into the Chief Digital and Artificial Intelligence Office (CDAO), exploring such collaborations.
Anthropic maintains a public usage policy that restricts applications of its AI such as autonomous weapons or uses that could violate human rights. The company was founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei, who left over concerns about AI safety and the pace of commercialization.