Who Decides What AI Should Do?
The most important question in technology isn't technical
Who decides what AI should do? Not the engineers writing the code. Not the politicians drafting the legislation. Not the philosophers publishing the papers. Right now, the answer is simpler and more uncomfortable than any of those: a handful of companies, moving faster than any institution can follow, making moral choices that affect billions of people.
And they're making profoundly different choices.
The moment it became real
In February 2026, Anthropic — the company behind Claude — walked away from a $200 million Pentagon contract. The Department of Defense wanted to use Claude for mass domestic surveillance without judicial oversight and for lethal autonomous weapons systems without human authorisation. Anthropic said no. Not "let's negotiate." Not "let's scope it differently." No.
Hours later, OpenAI signed the deal.
I've written about this in detail in LLMs to Langley, including what followed: Caitlin Kalinowski, OpenAI's robotics lead, resigning in protest. Nearly 900 employees across OpenAI and Google signing an open letter opposing the contract. Anthropic being designated a "supply chain risk" by parts of the US defence establishment for having the audacity to draw a line.
But this piece isn't about retelling that story. It's about what that story revealed.
Because what February made brutally clear is that the most important question in technology right now isn't technical. It's not about parameters or benchmarks or context windows. It's this: who decides what AI should do? And on what authority?
That contract moment wasn't a business decision. It was a moral one. Two companies, building comparable technology, looked at the same request and reached opposite conclusions. One saw a line that shouldn't be crossed. The other saw a contract worth signing.
And neither decision was made by voters, regulators, ethicists, or anyone who'll actually be affected by the consequences.
The governance gap
Here's the uncomfortable truth about AI governance in 2026: the gap between what AI can do and what anyone has agreed it should do is growing faster than any institution on earth can close it.
Governments are trying. The EU has the AI Act. The UK has its lighter-touch, principles-based approach. The US has swung between executive orders and deregulation depending on who's in office. But legislation moves in years. AI capabilities move in months. By the time a regulation is drafted, consulted on, amended, and enacted, the technology it was designed to govern has already evolved into something the drafters didn't anticipate.
This isn't a criticism of regulators. It's a structural problem. Democratic governance is slow by design — and that slowness is usually a feature, not a bug. You want deliberation. You want scrutiny. You want competing interests to be heard. But when the thing you're governing is accelerating exponentially, deliberation starts to look less like wisdom and more like paralysis.
So the decisions get made elsewhere. In boardrooms. In product meetings. In the quiet calculus of quarterly earnings calls. Not because companies are evil — most aren't — but because someone has to decide, and they're the ones with their hands on the controls.
The governance gap isn't just about regulation being too slow. It's about the fact that when regulation is absent, corporate values become public policy by default. And corporate values, however well-intentioned, are accountable to shareholders, not citizens.
Who's actually at the table
If you mapped out the people and institutions shaping the future of AI ethics right now, the power imbalance would be staggering.
The tech companies sit at the centre. Anthropic, OpenAI, Google DeepMind, Meta, Microsoft — these organisations control the frontier models, the compute infrastructure, and increasingly the research talent. Their internal policies on safety, deployment, and acceptable use carry more practical weight than most government white papers. When Anthropic decides Claude won't help with bioweapons research, that's a governance decision. When OpenAI decides to take a defence contract, that's a governance decision too. These aren't just product choices. They're choices about what kind of future gets built.
Governments are the next ring out. The EU has moved furthest with the AI Act — the first comprehensive attempt to regulate AI by risk category, with actual enforcement mechanisms and real penalties. The UK has taken a softer, sector-led approach, trusting existing regulators to adapt. The US approach has been inconsistent, lurching between ambition and retreat depending on the administration. China has its own framework, focused more on social stability than individual rights. None of these frameworks talk to each other particularly well.
Civil society — the organisations that are supposed to represent the public interest — is underfunded and underrepresented. Groups like the Ada Lovelace Institute, the AI Now Institute, and AlgorithmWatch do extraordinary work. But their combined budgets are a fraction of what the industry spends on lobbying in a single year. They publish reports. The companies ship products.
Academia is producing brilliant research that almost nobody in industry reads. The gap between what's being published in AI ethics journals and what's being implemented in production systems is vast. Geoffrey Hinton left Google so he could warn freely about the risks of the systems he helped pioneer, a warning he didn't feel he could deliver from inside the company. He's spent the time since repeating it with increasing urgency: the people building these systems don't fully understand what they're building. He won a Nobel Prize and still struggles to shift the conversation.
The asymmetry is the point. The people with the most power to shape AI's trajectory have the least democratic accountability. The people with the most democratic accountability have the least power. And the people who'll be most affected by these decisions — ordinary citizens, workers, communities — aren't at the table at all.
The ethics frameworks — and why they're not enough
If you search for "AI ethics" right now, you'll find no shortage of frameworks. UNESCO has its Recommendation on the Ethics of AI. The EU has its seven requirements for trustworthy AI. The UK government has its five cross-cutting principles. The OECD has its AI Principles. Every major tech company has a responsible AI policy.
The frameworks mostly say the same things, because the principles aren't controversial. Fairness. Transparency. Accountability. Human oversight. Safety. Privacy. These are the "five principles" or "seven requirements" or "eight pillars" that keep appearing in every institutional document. And they're all correct. Nobody is arguing against fairness or transparency in the abstract.
The problem is that frameworks without enforcement are suggestions. And suggestions don't change behaviour when there's money on the table.
Consider transparency. Every framework says AI systems should be transparent. But what does that mean in practice? Does it mean publishing model weights? Disclosing training data? Explaining individual decisions? Providing algorithmic audits? The principle is clear. The implementation is a battlefield of competing interests, technical constraints, and commercial sensitivities.
Or take fairness. Everyone agrees AI shouldn't be biased. But biased against whom? By what measure? Tested how? And who decides when a system is "fair enough" to deploy? These aren't philosophical questions anymore. They're product decisions being made every day by people who may never have read a single ethics framework.
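To make "by what measure?" concrete, here's a minimal sketch. The data is invented for illustration, and the two metrics (selection-rate gap versus true-positive-rate gap) are only two of many candidates, but it shows how the same model can pass one fairness test and fail another:

```python
import numpy as np

# Invented example: binary predictions and outcomes for two placeholder groups.
group  = np.array(["a"] * 6 + ["b"] * 6)
y_pred = np.array([1, 1, 1, 0, 0, 0,  1, 1, 1, 0, 0, 0])
y_true = np.array([1, 1, 1, 0, 0, 0,  1, 1, 1, 1, 1, 0])

def selection_rate(g):
    """Share of group g given a positive prediction (the demographic-parity view)."""
    return y_pred[group == g].mean()

def true_positive_rate(g):
    """Share of group g's actual positives the model caught (the equal-opportunity view)."""
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

print("selection-rate gap:", abs(selection_rate("a") - selection_rate("b")))   # 0.0
print("TPR gap:           ", abs(true_positive_rate("a") - true_positive_rate("b")))  # 0.4
# On the first measure the system treats both groups identically; on the second
# it misses far more of group "b"'s genuine positives. Same model, opposite verdicts.
```

Which of those gaps matters, and how large a gap is tolerable, isn't something the code can tell you. That's the judgement hiding inside "fair enough to deploy".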
The EU AI Act is the first serious attempt to turn principles into law — with risk categories, compliance requirements, and penalties up to 7 per cent of global revenue. I've covered this in detail in my guide to the EU AI Act. It's imperfect. It's already struggling to keep pace with foundation models that didn't exist when the drafting began. But it's the first time any jurisdiction has said: these aren't guidelines, they're rules, and breaking them has consequences.
That matters. Because the distance between an ethics policy and an ethics practice is measured in enforcement.
Responsible AI in practice
In my experience, the gap between what organisations say about responsible AI and what they actually do is one of the widest in all of technology.
Most companies I've worked with have an ethics policy. It's usually a PDF somewhere on the intranet. It says the right things. It was probably written by legal, reviewed by comms, and approved by someone senior who genuinely cares. And it has almost no operational impact on how AI gets built and deployed day to day.
Responsible AI in practice — the real version, not the press release version — looks like this:
It looks like bias testing before deployment, not after complaints. It looks like regular algorithmic audits conducted by people who didn't build the system. It looks like documentation of training data, model limitations, and known failure modes. It looks like human oversight mechanisms that actually have the authority to stop a deployment, not just flag concerns into a backlog that never gets prioritised.
It looks like a business that builds governance before it builds features.
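In code, that unglamorous version can be as simple as a release gate that runs before anything ships. This is a hypothetical sketch, not anyone's actual process: the threshold, the field names and the sign-off rules are assumptions, and every organisation's real checklist will look different.

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseReview:
    """Hypothetical record a model must carry before it is allowed to ship."""
    model_name: str
    bias_gap: float                          # worst observed metric gap across groups
    bias_gap_limit: float = 0.05             # assumed threshold, set by policy not engineering
    limitations_documented: bool = False
    failure_modes_documented: bool = False
    independent_auditor: str = ""            # someone who did not build the system
    reviewer_with_stop_authority: str = ""   # a named person who can halt the release
    blockers: list[str] = field(default_factory=list)

    def evaluate(self) -> bool:
        if self.bias_gap > self.bias_gap_limit:
            self.blockers.append(f"bias gap {self.bias_gap:.3f} exceeds limit")
        if not (self.limitations_documented and self.failure_modes_documented):
            self.blockers.append("limitations or failure modes undocumented")
        if not self.independent_auditor:
            self.blockers.append("no independent audit")
        if not self.reviewer_with_stop_authority:
            self.blockers.append("no reviewer with authority to stop the release")
        return not self.blockers

review = ReleaseReview(model_name="triage-assistant", bias_gap=0.09,
                       limitations_documented=True, failure_modes_documented=False)
if not review.evaluate():
    # The point is not the code; it is that "no" is a possible outcome.
    raise SystemExit(f"Deployment blocked: {review.blockers}")
```

The details matter less than the shape: the checks are mandatory, the auditor didn't build the system, and a named person can say no.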
The organisations that take this seriously tend to share one trait: someone senior enough to say "slow down" who is actually listened to. Not a chief ethics officer with no budget and no authority. Someone with genuine power who understands that moving fast and breaking things is a catastrophic philosophy when the things you're breaking are people's lives, livelihoods, and civil liberties.
Most organisations aren't there yet. And honestly, the incentive structure doesn't help. Being responsible is slower and more expensive than not being responsible. The market doesn't reward caution. Investors don't celebrate the product you didn't ship because it failed a bias audit. The companies doing this well are doing it despite the incentives, not because of them.
The surveillance thread
There's a specific dimension of AI ethics that deserves its own attention, because it's where the abstract becomes visceral: surveillance.
AI doesn't just collect data. It understands it. That distinction matters more than almost anything else in this conversation. We've had mass data collection for decades — CCTV, phone records, internet metadata. What AI adds is mass comprehension. The ability to not just record a million faces but recognise them. Not just store a million conversations but interpret them. Not just track a million movements but predict the next one.
The Anthropic/Pentagon story is a surveillance story at its core. The contract wasn't just about defence applications in the abstract. It was about deploying language models for domestic surveillance without judicial oversight. The word "domestic" is doing a lot of work in that sentence. This wasn't about monitoring foreign adversaries. It was about pointing the machinery inward, at citizens.
When that kind of capability exists — and it does exist, right now, today — the question of who controls it becomes the only question that matters. Not whether the technology works. It works. Not whether it could be useful. Of course it could. But useful for whom, authorised by whom, overseen by whom, and accountable to whom?
These aren't hypothetical questions for a future we might avoid. They're present-tense questions about systems that are already being deployed.
What you can actually do
I'm not going to pretend this is simple. The forces shaping AI's trajectory are enormous — geopolitical, economic, technological. No individual is going to single-handedly fix AI governance by making better consumer choices.
But that doesn't mean you're powerless. It means you have to be deliberate.
If you're choosing AI tools — for yourself, your team, your organisation — the ethics of the company behind the tool should be part of your evaluation. Not the only part. But a part. I've written a comparison of Claude, ChatGPT, and Gemini that touches on this. Who you give your money to is a signal. It's not the loudest signal in the market, but it's one of the few you actually control.
If you're building with AI — and increasingly, most knowledge workers are — build governance before you build features. Ask what data you're training on and whether you have the right to use it. Ask what happens when the system gets it wrong. Ask who's checking for bias, and how often, and with what authority to act on what they find. Don't wait for regulation to tell you what's responsible. You already know.
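One way to make sure those questions get answered, not just asked, is to make the answers a required artefact of the build. Here's a hypothetical sketch; the field names are illustrative rather than any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataSource:
    """One training or fine-tuning data source and the right to use it."""
    name: str
    licence: str        # "CC-BY-4.0", "internal", "unknown" -- "unknown" is itself a finding
    consent_basis: str  # how the people in the data agreed, if they did

@dataclass
class BuildRecord:
    """Hypothetical governance record completed before a feature ships."""
    purpose: str
    data_sources: tuple[DataSource, ...]
    wrong_answer_path: str        # what happens when the system gets it wrong, and who handles it
    bias_review_owner: str        # who checks for bias
    bias_review_cadence_days: int

    def unanswered(self) -> list[str]:
        gaps = []
        if any(s.licence.lower() == "unknown" for s in self.data_sources):
            gaps.append("data used without a known right to use it")
        if not self.wrong_answer_path:
            gaps.append("no defined path for handling wrong outputs")
        if not self.bias_review_owner:
            gaps.append("nobody owns bias review")
        return gaps
```

If a record like this can't be filled in, that's not a paperwork problem. It's the system telling you that you don't yet understand what you're shipping.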
If you're leading an organisation — close the gap between your ethics policy and your ethics practice. Make responsible AI someone's actual job, with actual authority and actual budget. Conduct audits. Be transparent about limitations. Don't deploy systems you don't understand into contexts where they can cause harm.
If you're a citizen — support regulation. Not because regulation is perfect, but because the alternative is leaving these decisions entirely to the market, and the market has already shown you what it does with that freedom. Engage with consultations. Write to your MP. Pay attention to which companies are making which choices, and let that inform where you spend your money and your attention.
The people building AI would prefer you didn't think about any of this. Not out of malice — out of convenience. Scrutiny is friction. Questions slow things down. And in an industry that worships speed, anything that slows things down feels like an obstacle.
But some obstacles are load-bearing.
The conversation that matters
The question isn't whether AI will be powerful. That's already settled. The question is whether that power will be directed wisely.
And "wisely" is a word that means different things to different people. It means something different to a defence minister than to a civil liberties lawyer. Something different to a shareholder than to a factory worker whose job just got automated. Something different to a parent than to an engineer.
Which is exactly why this can't be decided by any single group. Not by companies alone, however well-intentioned. Not by governments alone, however democratic. Not by academics alone, however brilliant. The decision about what AI should do — and shouldn't do — is too consequential to be left to anyone's unilateral judgement.
What we need is a genuine, messy, difficult, ongoing conversation. One where the people affected have as much voice as the people profiting. One where the question "should we?" is asked as seriously as "can we?" One where walking away from $200 million because you drew a line is seen as strength, not weakness.
We're nowhere near that conversation yet. But the fact that you've read this far suggests you think it matters.
It does. More than most people realise. And the window for having it — for shaping how this goes rather than just watching it happen — is not going to stay open forever.