Eight Companies Said Yes. One Said No.

The label Hegseth threatened them with had only been used on foreign adversaries before.

Introduction

Pete Hegseth gave every major American AI company a choice: accept a contract permitting use for "all lawful purposes," or be designated a supply chain risk, a label the Pentagon had only ever applied to foreign adversaries. Eight signed; Anthropic refused. The signers include the company processing your email, the one answering most of your search queries, the cloud provider hosting roughly a third of the public internet, and the satellite operator carrying home connections in 150 countries.

The Companies, and What They Run in Your Day

On May 1, 2026, the U.S. Department of War announced agreements with eight frontier AI vendors to deploy their models on the Pentagon's most classified networks: IL6 (SECRET-level, connected to SIPRNET) and IL7 (Top Secret). The press release names them in the same breath: SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, Amazon Web Services, and Oracle. Oracle was added hours after the initial seven-company release. The Pentagon's framing is "diversity of supply"; the contract clause every signatory accepted permits use for "all lawful purposes" without restriction.

Look at what the parent companies operate in regular American life. Google runs Gmail's 1.8 billion accounts and answers more than 8.5 billion search queries every 24 hours. OpenAI's ChatGPT reached around 900 million weekly users by March 2026, more than double its 400 million figure a year earlier. Microsoft's Office 365 ecosystem sits on roughly 450 million commercial seats, with Copilot embedded inside. AWS hosts around 31% of global cloud infrastructure, so a share of the apps and websites you touched today routed through Amazon's servers. SpaceX's Starlink connects about 10 million customers across more than 150 countries, and after SpaceX's early-2026 acquisition of xAI, the same parent now owns Grok and X.

So here's what's actually happened. Eight companies that collectively run the email, search, cloud, and satellite infrastructure of daily American life agreed to deploy their AI for any purpose the Pentagon considers lawful. The congressional oversight law designed to catch exactly this didn't trigger, because "all lawful purposes" contract language achieves waiver-equivalent results without being a formal waiver.

What "All Lawful Purposes" Actually Permits

OpenAI posted its contract terms on February 28, 2026: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols." That phrase, "all lawful purposes," is the clause Hegseth demanded every vendor accept. Anthropic refused two specific items inside that scope: no mass domestic surveillance of Americans, and no fully autonomous weapon systems. The other eight accepted both uses.

I keep coming back to one detail in the Internet Governance Project's analysis of the dispute. Under existing U.S. law, the government can legally buy location data, browsing histories, and financial records from commercial brokers. Feed that data into a frontier AI for analysis and you get something that functions in practice as mass surveillance without necessarily violating any current statute. Anthropic asked the Pentagon to add contract language prohibiting the bulk collection of Americans' publicly available information, and the Pentagon refused.

The Rule That Was Supposed to Prevent This

DoD Directive 3000.09, titled "Autonomy in Weapon Systems," has been the Pentagon's own rule on autonomous weapons since November 2012, and the CRS Insight walks through what it currently requires. The current version, updated January 25, 2023, defines autonomous weapon systems as ones that, "once activated, can select and engage targets without further intervention by a human operator." It requires "appropriate levels of human judgment over the use of force," meaning human involvement in deciding how, when, where, and why a weapon gets deployed. Senior Pentagon leadership has to sign off before any autonomous weapon enters development. In an "urgent military need," the Deputy Secretary of Defense can waive those requirements.

That waiver authority is exactly what Congress legislated around. The FY2026 NDAA, signed December 18, 2025, included Section 1061, a notification mandate requiring the Pentagon to tell congressional defense committees whenever it formally waives DoDD 3000.09. The report has to identify the systems covered, the rationale, and the duration. It was the third consecutive NDAA targeting the same oversight gap, after FY2024's §251 (notify within 30 days of any change to the directive) and FY2025's §1066 (annual reporting on lethal autonomous weapons through 2029).

The Loophole Congress Didn't Close

Section 1061 only triggers on a formal waiver of DoDD 3000.09. The "all lawful purposes" language signed by the eight companies isn't a waiver of the directive; it's a contract clause that imposes no autonomous-weapons restrictions on the vendor in the first place. Same practical end state as a waiver, no notification trigger, no notice to Congress.

The CRS Insight by Kelley Sayler confirms the dispute revolved around "all lawful purposes" and Anthropic's two refusals, and notes that DoD is not publicly known to be using any frontier model inside autonomous weapons systems today. The fight here is about what the contracts will permit later, not what's running today. Three consecutive NDAAs tried to legislate transparency around exactly this kind of expansion, and the May 1 deals are structured to fall outside every one of them because none involve a formal waiver.

Google's Contract Is the Sharpest Receipt

The clearest single piece of evidence about what these contracts actually do came from Fortune's reporting on Google. Charlie Bullock, on the U.S. Law and Policy team at LawAI, read the deal and laid it out plainly: "Under Google's terms, if there are technical safeguards within the models that prevent the government from doing something it wants to do, Google is obliged to step in and remove those safeguards." Bullock added that OpenAI's deal at least had "some kind of contractual guarantee" against being used for mass domestic surveillance, while Google's carries no such language.

More than 580 Google employees, including more than 20 directors and VPs, signed a letter to Sundar Pichai urging him to refuse classified Pentagon AI work outright. Alex Turner, a Google DeepMind research scientist, called the deal "shameful." Google had already cleared the path internally. In February 2025, the company removed its public AI principles pledge against using AI for weapons or surveillance, with Demis Hassabis citing "global competition taking place for AI leadership." A year later, when the Pentagon contract arrived, the policy framework no longer stood in the way.

The xAI Side Door

Senator Warren's letter to Hegseth on September 10, 2025, walks through how xAI ended up on the original July 2025 contract list alongside Anthropic, Google, and OpenAI. xAI hadn't been considered for a DoD contract before March 2025. A former Pentagon contracting official told Warren's office the xAI deal "came out of nowhere." The $200 million contract was awarded July 14, 2025, during the same window that Elon Musk's DOGE role gave him access to nonpublic federal contracting data. After SpaceX's early-2026 absorption of xAI, the May 1 announcement extends Musk's classified-network access through Starlink and Grok.

What Refusal Cost Anthropic

When Anthropic held the line on its two carve-outs, Hegseth set a deadline of 5:01 p.m. on February 27, 2026, and threatened to invoke the Federal Acquisition Supply Chain Security Act. The legal analysis from Mayer Brown confirmed Anthropic was designated a supply chain risk, a label that has "historically only been applied to foreign adversaries." Its $200 million contract was canceled and the company was blacklisted from Pentagon work, even though Claude was still being used on classified networks during U.S. military operations against Iran. Senators Markey and Van Hollen called the threats "an extraordinary and deeply alarming abuse of government power."

Who Benefits

The eight companies get money and market lock-in. The One Big Beautiful Bill Act, signed July 4, 2025, appropriated specific AI dollars to the Pentagon: $145 million for AI-powered aerial and naval attack systems, $500 million for the DoD AI ecosystem and Cyber Command, $450 million for AI in naval shipbuilding, $200 million for AI-accelerated DoD financial audits. The FY2027 budget request goes to $54.6 billion for the Defense Autonomous Warfare Group, up from $225 million the prior year, inside a $1.5 trillion total defense budget. CNN put it directly: "Tech companies have been jostling for that money."

Hegseth and the Department of War get unrestricted access plus a precedent. Vendor refusal now carries the foreign-adversary label as a consequence, and a senior DoD official told CNBC that vendors aren't permitted to insert themselves "into the chain of command by restricting the lawful use of a critical capability." Hegseth called Amodei an "ideological lunatic" in Senate Armed Services testimony on April 30, 2026.

The Trump administration gets political cover. What was an embarrassing legal standoff with an American AI company during active military operations gets reframed as a story about supply diversity and AI-first readiness. Axios reported on April 29 that the White House is already drafting guidance to bring Anthropic back, on the administration's terms.

The Procurement Outpacing the Statute

Three NDAAs in a row tried to legislate oversight of autonomous weapons AI, and the Pentagon used contract language to step around all three. The mechanism is precise enough to be legible from the outside: an autonomous-weapons waiver triggers Section 1061 notification, and "all lawful purposes" triggers nothing because nothing is being waived. By the time DoDD 3000.09 might apply to a specific system, the contract clause has already settled the question.

Congress keeps legislating reactively against the Pentagon's last move while the procurement office drafts the next round of contracts. Senator Warren's Protecting AI and Cloud Competition in Defense Act addresses data rights but not the waiver-equivalent contract loophole. Nothing currently pending in Congress closes the gap that May 1 just walked through.

The Bottom Line

If you use Gmail, Google Search, ChatGPT, Office 365, AWS-hosted apps, Starlink, or any product the eight companies build, your AI provider is now a Pentagon contractor with no contractual right to refuse what the Department considers a lawful purpose. Anthropic asked for two carve-outs and got a foreign-adversary risk designation, a canceled $200 million contract, and a 5:01 p.m. deadline. Eight other companies looked at that and signed.

What happens at the next definitional fork is the open question. Section 1061 notification triggers on formal waivers, and the contract language sidestepped the trigger. If Congress writes a fix into the FY2027 NDAA, the Pentagon will already know which language the new statute targets and which alternative phrasing to use instead. The gap between what Congress can legislate and what the Pentagon can procure has been visible for three years, and May 1 was the day eight of the largest companies in American consumer tech put themselves on the procurement side of it.