LLMs to Langley
When the companies building intelligence started selling it to the people who collect yours
On the morning of 28 February 2026, Anthropic — the company behind Claude — walked away from a two-hundred-million-dollar contract with the United States Department of Defense. By that evening, OpenAI had signed one.
The gap between those two events was measured in hours. The gap between what they represent could define the next decade of artificial intelligence.
This isn't a story about two companies. It's a story about a question that every person building, using, or affected by AI needs to sit with: what happens when the most powerful cognitive technology ever created gets pointed inward — at the citizens of the country that built it?
What actually happened
The facts, in order. They matter more than any commentary I can offer.
The Pentagon had been working with Anthropic under a contract worth roughly $200 million, deploying Claude on classified government networks. Anthropic agreed to the partnership but drew two lines it would not cross. First: no use of its models for mass domestic surveillance of American citizens without judicial oversight. Second: no deployment in lethal autonomous weapons systems without human authorisation.
These weren't radical positions. They were the bare minimum. The floor, not the ceiling.
Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if the company would not allow its models to be used "for all lawful purposes," the contract would be cancelled. More than that — Anthropic would be designated a "supply chain risk." That classification is normally reserved for companies connected to foreign adversaries. China. Russia. North Korea. Not a San Francisco AI lab whose chatbots write poetry.
Amodei's response was unambiguous. "Threats do not change our position. We cannot in good conscience accede to their request."
So the Pentagon cut them off. And OpenAI stepped in.
A company was labelled a national security risk — the same designation applied to foreign adversaries — because it refused to let its AI be used for mass surveillance of American citizens. That sentence deserves to be read twice.
The speed of the thing
What struck me wasn't the deal itself. Governments have always wanted access to the sharpest tools available. That's not new. What struck me was the speed.
Hours. OpenAI secured the contract within hours of Anthropic's departure. Which means one of two things: either the deal was negotiated in advance, sitting in a drawer for the moment Anthropic said no — or it was assembled so hastily that the guardrails Anthropic spent months fighting for were treated as optional extras. Neither interpretation is comforting.
Sam Altman told employees in an internal memo that the company shared the same concerns as Anthropic. He later acknowledged the rollout appeared "opportunistic." That word — opportunistic — is doing a lot of heavy lifting. When the head of an AI company describes his own defence deal as opportunistic, something has gone sideways.
Within days, nearly 900 employees across OpenAI and Google signed an open letter. Not a leaked Slack message. Not an anonymous tip to a journalist. A letter, with names attached, calling on their employers to refuse government demands for domestic mass surveillance and autonomous weapons targeting. Its title was blunt: "We Will Not Be Divided."
They could see what was happening. The Pentagon was playing companies against each other. If you won't do it, someone else will. The oldest leverage play in procurement. It only works if nobody talks to each other. The employees decided to talk.
The resignation
Then, on 7 March, Caitlin Kalinowski resigned.
Kalinowski led OpenAI's robotics division. Before OpenAI, she spent years at Meta leading the Orion augmented reality project. Before that, Oculus. Before that, Apple, designing MacBook hardware. This is not a junior employee with an axe to grind. This is a senior technical leader who built physical things for some of the most demanding hardware companies on earth.
Her post on X was precise. Not angry. Not theatrical. Precise. Worth reading in full:
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.
— Caitlin Kalinowski, 7 March 2026
In a follow-up, she clarified: "My issue is that the announcement was rushed without the guardrails defined. These are too important for deals or announcements to be rushed." Not anti-military. Not anti-OpenAI. Anti-recklessness.
When the person who leads your robotics division — the division building AI systems capable of physical action in the real world — resigns over how quickly you agreed to work with the military, that is not a personnel issue. That is a signal.
The voices that came before
Kalinowski is not the first person to draw this line. She joins a lineage of researchers and practitioners who looked at the trajectory and said, publicly, this is not where we should be going.
Geoffrey Hinton — who helped develop backpropagation, the algorithm that makes deep learning work, and who shared the Turing Award with Yoshua Bengio and Yann LeCun for their work on deep learning — left Google in 2023 specifically so he could speak freely about the risks. He has since warned, with increasing urgency, that autonomous weapons represent one of AI's most dangerous applications. Not in the abstract. In practice.
His argument is unsentimental. Lethal autonomous weapons make war cheaper for wealthy nations. They replace body bags with broken hardware. And when the political cost of conflict drops — when citizens stop seeing flag-draped coffins on the evening news — the threshold for starting wars drops with it. More conflicts. More interventions. Less accountability. The machine absorbs the moral cost, and the humans who gave the order never have to look at what it did.
The thing that stops rich countries invading poor countries is their citizens coming back in body bags.
— Geoffrey Hinton
Hinton, Bengio, and over 200 researchers signed an open letter at the United Nations calling for international red lines on AI by the end of 2026. Among the proposed limits: no integration of AI into lethal autonomous weapons. No self-replicating AI without safeguards. No mass impersonation of humans.
These are not fringe positions held by people who don't understand the technology. These are the positions of the people who invented it. When the architects of a building tell you the foundations are cracking, the correct response is not to question their credentials.
The surveillance question
Mass domestic surveillance is not a hypothetical concern. It is a pattern with a history.
The United States government, through the NSA, collected its own citizens' phone records in bulk and monitored internet communications without individual warrants for years before Edward Snowden's disclosures in 2013 made the scale of it public. The legal and political debates that followed resulted in some reforms — the USA FREEDOM Act of 2015, modifications to Section 702 of FISA — but the fundamental infrastructure remained. The capability was never dismantled. It was narrowed, oversight was tightened, and everyone moved on.
Now add large language models.
An LLM deployed on classified government networks doesn't just collect information. It understands it. It can read context, infer intent, identify patterns across millions of conversations simultaneously. It can do what no team of human analysts ever could: process the full firehose of domestic communications and flag anything that fits a pattern defined by someone you'll never meet, under criteria you'll never see, with oversight that may or may not exist depending on which interpretation of which statute a classified court chose to adopt on a particular Thursday.
That's not surveillance as we understood it in 2013. That's something qualitatively different. The difference between someone reading your post and someone reading your intent. An LLM can infer what you meant when what you wrote was something else entirely. It can identify subtext, sarcasm, coded language, emotional state. At scale. In real time.
The question is not whether governments should have intelligence capabilities. Of course they should. National security is real. Threats are real. The question is whether those capabilities should operate without judicial oversight — and whether a private AI company should be the one deciding where the line falls.
The robotics dimension
There's a reason Kalinowski's resignation carries more weight than a policy researcher's would. She led robotics. Physical AI. The part of the programme where intelligence stops being words on a screen and starts being a machine that moves through the world.
A language model that monitors communications is concerning. A language model that controls a drone is something else entirely. The convergence of large language models with physical systems — robots, autonomous vehicles, weapons platforms — is the step that takes this from a civil liberties debate to an existential one.
Spatial AI models can now perceive environments, make decisions, and take physical action. The same sim-to-real transfer that lets a warehouse robot learn to pick up boxes can, with different training data and different objectives, teach a drone to identify and engage targets. The underlying architecture is agnostic. It doesn't know the difference between a parcel and a person. It optimises for whatever objective function it was given.
When Hinton warns about autonomous weapons, he's not speculating about science fiction. He's describing a straightforward application of existing technology. The technical barriers have largely been cleared. What remains is policy — and policy, it turns out, moves considerably slower than procurement.
The corporate divergence
What makes this moment instructive is the contrast.
Anthropic said no. Not "no, but." Not "let's discuss the terms." No. And they were punished for it — designated a supply chain risk, cut off from government contracts, blacklisted from the defence establishment. The message was clear: refuse, and there are consequences.
OpenAI said yes. Quickly. And then, when the backlash arrived — from employees, from the public, from their own robotics lead walking out — they began clarifying, revising, explaining. Altman said the deal would include prohibitions on domestic surveillance and autonomous weapons. But those prohibitions were announced after the contract was signed. The guardrails came after the road was already built.
This pattern — move fast, deal with the ethics later — is the operating philosophy of Silicon Valley applied to national security. It works reasonably well when you're shipping a feature that might annoy some users. It works less well when you're deploying cognitive infrastructure inside the defence apparatus of a nuclear-armed superpower.
The two companies revealed different theories about what an AI company owes the world. One says: our obligation is to the technology's potential, and we trust institutions to use it responsibly. The other says: our obligation is to the guardrails, precisely because we don't trust institutions to impose them on their own.
History tends to favour the second theory.
What we can actually do
Here's the part I find difficult to write, because the honest answer is uncomfortable.
There isn't a lot that an individual can do to alter the trajectory of military AI procurement. The decisions happen in classified briefings, behind closed doors, between people with security clearances and budget authority. The democratic feedback loop operates on a timescale of years. The technology operates on a timescale of months.
But "not a lot" isn't "nothing."
Stay informed. Not in the doom-scrolling sense. In the structural sense. Understand the difference between mass surveillance and targeted intelligence. Understand what judicial oversight means and why it matters. Understand what lethal autonomy means — not in a film, in a procurement document. The gap between public understanding and technical reality is where bad policy hides.
Support the companies that draw lines. Anthropic walked away from $200 million and was punished for it. The market can reward what the government penalises. Use the tools built by companies whose values you want to see more of in the world. That's not brand loyalty. It's a vote.
Listen to the people who resign. Kalinowski didn't leave quietly. She explained why. So did Hinton when he left Google. So did the 900 employees who signed their names to a letter. Nobody advances their career by publicly disagreeing with their employer on matters of national security. When people take that risk, the least we can do is pay attention.
Talk about it. At dinner tables, in classrooms, in the conversations we have with our children about what AI is and what it's for. My kids ask me about AI regularly. They're curious, not frightened. But they need context that the technology itself won't provide. They need to understand that a tool can be used to help someone get out of bed in the morning or to watch them while they sleep — and that the difference between those two applications isn't technical. It's moral.
The line
Dario Amodei drew a line and was punished for it. Caitlin Kalinowski drew a line and walked away from her job. Geoffrey Hinton drew a line and spent his Nobel Prize acceptance speech warning about the thing he helped create.
The line is the same in every case. There are applications of this technology that are incompatible with the society we want to live in. Not because the technology is evil — because deploying it without safeguards, at speed, under pressure, in the service of goals that haven't been publicly debated, is how you build a world nobody asked for.
The LLMs are heading to Langley. Some of them are already there. The question isn't whether artificial intelligence will be used by governments — it will, it should, it must. The question is whether that use will be bounded by law, constrained by oversight, and guided by principles that someone is willing to lose money to defend.
Today, at least, one company was. And one person was willing to lose her job.
That's not enough. But it's not nothing.