The NSA Said Grok Was Risky. The Pentagon Signed Anyway.

In late February 2026, the Pentagon finalized a deal giving xAI access to classified military networks. The timing drew scrutiny almost immediately: the agreement came shortly after Anthropic had declined to sign a similar arrangement, citing insufficient guarantees about how its models would be used. The Pentagon, apparently not short on AI vendors willing to sign, turned to xAI and OpenAI instead.

What made the xAI agreement particularly unusual was what reportedly happened before the ink dried. The National Security Agency had conducted a classified review and determined that Grok had specific security concerns that other AI systems did not share, according to reporting from NBC News. Those concerns did not stop the deployment from proceeding.

On March 16, Senator Elizabeth Warren sent a letter to Defense Secretary Pete Hegseth pressing for details on the agreement and demanding a response by March 27. "I am concerned that Grok's apparent lack of adequate guardrails could pose serious risks to the safety of U.S. military personnel and to the cybersecurity of classified systems," she wrote.

What Warren Is Asking

Warren's letter requests the full text of the Pentagon-xAI agreement, all communications that led to it, and specifics on how the department plans to manage the risks she outlined. Her stated concerns are concrete: Grok could leak classified information to adversaries, could be manipulated through biased or inaccurate training data, and may lack the safety controls needed to prevent harm to service members.

She also wants documentation on red-team testing, third-party audits, incident response protocols, and the authorization-to-operate basis that allowed Grok onto secure systems.

The concern is not theoretical. Grok's track record over the past year has been troubled: the model generated explicit images from real photographs without consent (including those of minors), produced antisemitic content after a July 2025 technical update, and, according to Warren's letter, provided advice on how to commit murders and terrorist attacks. Each incident drew public attention, but none appears to have affected the Pentagon procurement.

The Responsible AI Departure

Adding to the picture: the Department of Defense's Chief of Responsible AI reportedly circulated internal memos warning about Grok's safety profile, gained little traction, and eventually stepped down. The specific timing of the resignation and whether it was formally connected to the Grok deployment have not been publicly confirmed, but the sequence has drawn attention from lawmakers and civil society organizations.

A coalition of nonprofits separately urged the government to suspend Grok's use across federal agencies in February, citing the same pattern of harmful outputs. A class action lawsuit was filed against xAI on March 16, the same day Warren's letter was released, alleging that Grok generated sexual content from real images of minor plaintiffs.

How the Deal Came Together

xAI is not new to the Department of Defense. In July 2025, the company received a roughly $200 million Pentagon contract to develop an AI application for military use. The classified network agreement represents a significant expansion of that relationship.

The broader context is a procurement environment under significant pressure. Over the previous year, the DoD had been accelerating its AI adoption under an administration that treated speed of deployment as a strategic priority. When Anthropic insisted on contractual guarantees that its models would not be used for domestic surveillance or applied directly in lethal weapons systems, the Pentagon walked away from that negotiation. The full account of that dispute illustrates how hard-line safeguard requirements can become a dealbreaker in defense AI procurement.

That context matters for understanding the Grok agreement. The NSA's classified review reportedly identified concerns specific to Grok, but those concerns apparently did not rise to the level that would pause a deal the Pentagon wanted to close.

The Oversight Gap

Warren's letter is, at its core, a congressional oversight action. It does not have the force of law, but it creates a formal record. If the Pentagon responds fully, Congress will have an unusually detailed picture of how the agreement was structured and what safeguards, if any, were required. If the response is incomplete or the deadline is missed, that becomes its own kind of signal.

What is clear even without a response: the standard for AI deployment on classified networks is still being worked out in real time. There is no established federal framework that defines what security review a large language model must pass before accessing sensitive government systems. The NSA review that flagged Grok apparently happened, but it did not produce a formal pause or public requirement for remediation.

For anyone following how AI models are being integrated into government infrastructure, this is worth watching. The decisions being made now, under procurement pressure and without settled policy, will define what "adequate safeguards" means for the next several years. The security risks inherent in how AI models process and retrieve information are not theoretical. They are structural, and they become more consequential when the information in question is classified.

The March 27 deadline Warren set gives the Pentagon less than two weeks to respond. Whether it does, and what it says, will determine what the next phase of this oversight effort looks like.