More than 30 OpenAI and Google DeepMind employees filed a statement supporting Anthropic's lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply-chain risk. The rare show of unity among competing AI companies signals deep industry concern about government overreach into AI governance.
An Unprecedented Alliance
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.
The amicus brief represents a remarkable moment: employees of OpenAI and Google, companies that compete fiercely with Anthropic, publicly defending their rival against government action.
What Happened
Late last week, the Pentagon labeled Anthropic a supply-chain risk—a designation usually reserved for foreign adversaries like China and Russia. The reason: Anthropic refused to allow the Department of Defense to use its technology for mass surveillance of Americans or for autonomously firing weapons.
The DOD argued that it should be able to use AI for any "lawful" purpose and not be constrained by a private contractor's ethical guidelines.
The Industry Response
The Google and OpenAI employees made a pointed argument: if the Pentagon was "no longer satisfied with the agreed-upon terms of its contract with Anthropic," the agency could have "simply canceled the contract and purchased the services of another leading AI company."
Instead, within moments of designating Anthropic a supply-chain risk, the DOD signed a deal with OpenAI, a move many of the ChatGPT maker's own employees protested.
Why This Matters
The brief affirms that Anthropic's stated red lines reflect legitimate concerns warranting strong guardrails. Without public law to govern AI use, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse.
"If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness," the brief reads. "And it will chill open deliberation in our field about the risks and benefits of today's AI systems."
The Stakes
This case represents a watershed moment for AI governance. The outcome will determine whether the government can compel AI companies to abandon their safety guidelines or whether developers retain the right to set ethical boundaries on how their technology is used.
Many employees who signed the statement also signed open letters urging the DOD to withdraw the label and calling on their own companies' leaders to refuse unilateral use of AI systems for military purposes without ethical constraints.
The Bottom Line
For the first time, competing AI companies have found common ground: the belief that ethical AI development requires the right to say no—even to the Pentagon.
