The Pentagon announced Friday that it has finalized agreements with seven technology firms to integrate their artificial intelligence tools into its classified networks, giving the military AI-driven capabilities to support combat operations.
Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX will contribute their resources to “enhance warfighter decision-making in complex operational environments,” according to the Department of Defense.
Notably missing from the list is AI firm Anthropic, following its public conflict and legal battle with the Trump administration over ethical and safety concerns regarding AI use in warfare.
In recent years, the Defense Department has significantly accelerated its adoption of AI. As reported by the Brennan Center for Justice in March, the technology can shorten the time needed to identify and engage battlefield targets, while also streamlining weapons maintenance and supply chain logistics.
However, AI’s military application has sparked worries about potential invasions of Americans’ privacy and the risk of machines autonomously selecting targets. One company involved in the Pentagon contracts emphasized that its agreement mandates human oversight in specific scenarios.
Concerns intensified during Israel’s military campaigns against militants in Gaza and Lebanon, where U.S. tech giants reportedly assisted Israeli forces in tracking targets. The sharp rise in civilian casualties raised alarms that these AI tools may have contributed to unintended deaths.
Debates over military AI use remain unresolved
Helen Toner, interim executive director at Georgetown University’s Center for Security and Emerging Technology, noted that these new Pentagon contracts emerge amid growing unease about over-reliance on AI in combat settings.
“Much of modern warfare involves personnel in command centers monitoring screens and making high-stakes decisions amid chaotic, rapidly evolving situations,” said Toner, a former OpenAI board member. “AI can assist by summarizing data or analyzing surveillance footage to flag potential threats.”
Still, she stressed that key questions persist around appropriate human involvement, risk management, and operator training.
“How do you deploy these tools quickly enough to gain a strategic edge,” Toner asked, “while ensuring operators are properly trained, understand the technology’s limitations, and avoid placing blind trust in it?”
Anthropic had raised similar concerns, seeking contractual guarantees that its AI would not be used in fully autonomous weapons or for domestic surveillance of U.S. citizens. Defense Secretary Pete Hegseth responded that the company must permit any use the Pentagon considers lawful.
Anthropic filed suit after President Donald Trump, a Republican, moved to ban all federal agencies from using its chatbot Claude, and Hegseth moved to designate the company a “supply chain risk”—a label intended to guard against foreign interference in national security systems.
OpenAI had previously struck a deal with the Pentagon in March to effectively replace Anthropic by deploying ChatGPT in classified settings. On Friday, OpenAI confirmed that the newly announced agreement is the same one it revealed earlier this year.
“As we stated when we first announced our partnership months ago, we believe those defending the United States deserve access to the world’s most advanced tools,” the company said.
According to a source familiar with the matter but not authorized to speak publicly, one company’s contract includes provisions requiring human oversight whenever AI systems operate autonomously or semi-autonomously. The agreement also stipulates that AI tools must be used in ways that uphold constitutional rights and civil liberties.
These conditions mirror the very issues that led to Anthropic’s withdrawal, though OpenAI has said it secured comparable safeguards in its own Pentagon deal.
The Pentagon’s perspective
Emil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that relying on a single AI provider would have been imprudent—an implicit reference to the rift with Anthropic.
“When we realized one partner wasn’t willing to collaborate in the way we needed, we proactively secured multiple providers,” Michael explained.
Some firms, like Amazon and Microsoft, have longstanding relationships with the military in classified contexts, and it’s unclear whether the new deals substantially change those arrangements. Others, such as chipmaker Nvidia and AI startup Reflection, are newcomers to this space. Both develop open-source AI models, which Michael has prioritized as part of an effort to offer an “American alternative” to China’s rapidly advancing open AI ecosystem.
The Pentagon confirmed Friday that military personnel are already using AI capabilities through its official platform, GenAI.mil.
“Warfighters, civilians, and contractors are actively applying these tools in real-world tasks, reducing processes that once took months to just days,” the department stated. It added that expanding AI capabilities will “equip service members with the confidence and tools necessary to protect the nation from any threat.”
According to Toner of Georgetown University, the military often uses AI much like the private sector—to automate repetitive tasks that would otherwise consume significant human time.
For example, AI can predict when a helicopter requires maintenance, optimize the movement of troops and equipment, or help distinguish between civilian and military vehicles in drone surveillance feeds.
Still, she cautioned against excessive dependence.
“There’s a well-documented phenomenon called automation bias, where people tend to overestimate machine reliability,” Toner warned.