The Last Luddite
A defence of scepticism in the age of synthetic intelligence
Sajad Saleem
the mediocre generalist
My friend Paul — not his real name, but close enough that he'll know it's him and be mildly annoyed — refuses to use AI. Flatly. Categorically. With the quiet conviction of someone who has thought about it carefully and arrived at a position he's not interested in debating.
He's not a technophobe. He builds furniture by hand, runs a small business, does his own accounts on a spreadsheet he designed in 2011 and has never seen reason to update. He's on email and WhatsApp and uses Google Maps and all the usual modern concessions. But when it comes to AI — ChatGPT, image generators, voice assistants, any of it — he has drawn a line and he's standing behind it with both feet planted and his arms folded like a man who's been told the world is changing and has decided the world is wrong.
"It's not that I think it's evil," he told me last month, over a flat white he'd taken three minutes to order with the same deliberation he applies to everything. "It's that I don't trust what it does to people. The way they stop thinking for themselves. The way they hand over their judgment to a machine and don't even notice they're doing it."
I wanted to argue with him. I usually do. It's practically the structure of our friendship: I enthuse, he resists, we both finish our coffees and feel intellectually stimulated. But this time, I paused. Because he's not wrong. Not entirely. And the fact that he's not entirely wrong is something the AI optimist community — a community I broadly belong to, though with increasing footnotes — doesn't reckon with nearly enough.
The original Luddites
A piece of history that's been thoroughly misrepresented, possibly because the victors write the history and the victors in this case owned the factories.
The Luddites weren't stupid. They weren't anti-technology. They were skilled textile workers in early 19th-century England — Nottinghamshire, Yorkshire, Lancashire. They could operate complex machinery. They understood technology intimately. Probably better than the factory owners who deployed it.
What they opposed wasn't the machines themselves. It was the way the machines were being used: to replace skilled workers with unskilled ones, to drive down wages, to concentrate wealth among owners while destroying the livelihoods of people who had spent years mastering a craft. They smashed machines the way a worker goes on strike — as a last resort when other avenues had been exhausted. They weren't luddites in the modern sense. That's the irony. The word betrays the people it was stolen from.
And the Luddites lost, of course. They always lose, historically speaking. The machines came, the old crafts died, and the world moved on. But "moved on" does a lot of heavy lifting in that sentence. It took decades for the benefits of industrialisation to reach the working class. Generations suffered in the interim — proper, bone-deep, Charles-Dickens-wasn't-exaggerating suffering.
I think about this often in the context of AI. The parallels are obvious. But the differences matter more: the Luddites had decades to adjust. We might have years. The loom took a generation to displace the weaver. The algorithm can do it over a long weekend.
The legitimate fears
The fears deserve to be taken seriously. Not dismissed, not hand-waved away with talk of "new jobs we can't imagine yet." That phrase has become the tech industry's equivalent of "thoughts and prayers." Sincere, perhaps. Useless, certainly.
Job displacement. Already happening. Not in the dramatic, science-fiction sense of robots marching through office corridors. In the quiet, undramatic sense of tasks being automated out from under people who built their careers on those tasks. Copywriters finding their clients now use ChatGPT. Illustrators watching commissions dry up. Junior lawyers whose research work is done by AI in minutes instead of hours. The list grows monthly, like a slow-moving flood that everyone can see but nobody's sandbagging against.
Here's the standard rebuttal: technology always creates more jobs than it destroys. Historically, that's true. But there's a subtlety the optimists gloss over: the transition period hurts. New jobs don't appear simultaneously with the destruction of old ones. They don't appear in the same places, or for the same people, or requiring the same skills. History remembers the destination. It forgets the journey.
The industrial revolution created enormous wealth. It also created child labour, slums, and working conditions so appalling that it took a century of reform to establish basic protections. "It'll work out in the long run" is cold comfort when the short run lasts a generation.
Creative devaluation. This one cuts close to home. I write. I've always written — badly at first, then less badly, then occasionally with something approaching competence, which is the most a writer should ever claim without inviting cosmic retribution. And I now live in a world where a machine can produce passable prose in seconds. Not great prose. Good enough. The two most dangerous words in the English language for anyone who makes things for a living.
When the supply of adequate writing becomes effectively infinite, the price drops to zero. That's just economics, and economics doesn't care about your feelings or your MFA. We fed the machine everything we'd made, and now it's making it back at us, cheaper and faster. There's a grim poetry in that.
My friend Sarah — also not her real name — is a freelance illustrator. She's brilliant. Two years ago, she was fully booked, turning down work. Last month she told me she's lost about forty per cent of her regular clients. Not because her work got worse. Because their budgets shrank, and the AI-generated alternatives got good enough. "Good enough" haunts creative professionals the way "automated" haunted factory workers.
Surveillance and control. AI systems are, by their nature, centralising technologies. They require enormous data, enormous compute, and specialised expertise. The most powerful AI systems are controlled by a very small number of very large companies — and, increasingly, a very small number of governments. The Panopticon used to be a thought experiment. Now it has an API.
None of this is hypothetical. Every one of these things is happening right now, today, in July 2025. While we argue about whether AI can write a decent sonnet.
The cost of opting out
So Paul has a point. Multiple points, all legitimate.
But there's a cost to his position. And it grows every day, compounding like interest on a debt you're trying to ignore.
The world is reorganising itself around AI. Not slowly, not tentatively — rapidly and with little sign of reversing. Doctors use AI for diagnostics. Lawyers for research. Teachers for lesson planning. Professional life is being rebuilt with AI at its core, and the rebuilding isn't waiting for everyone to feel comfortable about it.
To opt out entirely is to accept an increasing distance from how the world actually operates. It's like refusing email in 2005, or refusing a smartphone in 2015. You can do it. Nobody will stop you. But the world will quietly, persistently route around you. Opportunities will flow through channels you don't use. You'll be principled and correct and increasingly invisible. The loneliest kind of right.
And the tragedy of complete opt-out isn't missing a tool. It's losing your voice in the conversation about how that tool gets used. You can't steer a ship you refuse to board.
I told Paul this. He shrugged — he has a very eloquent shrug, the kind that contains entire philosophical positions. "Then the decisions will be made by people like you," he said. "And I trust you more than I trust the machines."
Which is a kind thing to say. It's also a terrifying amount of responsibility to place on people who are mostly just figuring it out as they go along. Which is all of us. Every single one.
What the sceptics protect
What I want the AI optimists — my people, broadly speaking — to understand about the sceptics:
They protect something essential. They protect the right to question. In a culture that increasingly treats technological adoption as a moral imperative — you must use AI or you'll be left behind — the sceptic is a reminder that speed is not wisdom. "Everyone's doing it" has never been a good argument for anything. It wasn't a good argument for smoking. It wasn't a good argument for subprime mortgages. And it isn't a good argument for this.
Scepticism and enthusiasm aren't opposites. They're the same energy pointed in different directions. The dangerous people are the ones who feel nothing — who adopt without thinking or refuse without caring. Paul cares. That's why he resists. The tech evangelists care. That's why they build. The real threat is the vast middle that shrugs and lets it happen.
What does the sceptic ask? Who made this? With what data? For whose benefit? At whose expense? These aren't anti-technology questions. They're pro-humanity questions. The kind that, historically, get asked too late — after the factory's been built, after the river's been polluted, after the community's been displaced. The sceptic asks them early, when there's still time to influence the outcome.
History is littered with innovations that would have benefited from more scepticism earlier. Leaded petrol. Asbestos. Thalidomide. Social media's effect on adolescent mental health. In each case, the sceptics were dismissed as fearmongers, enemies of progress. In each case, they turned out to be right about the things that mattered most.
Scepticism isn't the enemy of progress. It's the immune system. Without it, progress becomes growth without direction, speed without wisdom, capability without conscience. There's a difference between a brake and an anchor, and anyone who can't tell them apart shouldn't be driving.
The dinner table test
I come back, as I always do, to my children. They're growing up in the middle of this, and their range of attitudes roughly mirrors that of the adult population — which suggests that the adult population's positions might owe less to rational analysis and more to temperament than any of us would care to admit.
At the dinner table, they ask me questions I can't fully answer. Will AI take my job? Is AI alive? Should I use AI for my homework? My best answers aren't brilliant. But the questions force me to think honestly about what I actually believe, as opposed to what I say I believe in essays like this one. Children are brutal epistemological auditors. They can smell false certainty from across the room.
What I'd tell the last Luddite
If I could sit down with the last genuine sceptic — the last person standing firm against the tide, arms folded, coffee going cold — I'd say this:
You're not wrong to worry. Job displacement is real. Creative devaluation is real. Surveillance is real. Concentration of power is real. Anyone who tells you otherwise is selling something. Probably a subscription.
But here's where I land, after all the weighing and balancing and trying to be fair to every side: the sceptics are right about the dangers and wrong about the remedy. Opting out isn't resistance — it's surrender dressed as principle. The only way to shape the future is from inside it. I don't say this lightly. I've sat with it through watching Sarah lose clients and listening to Paul's eloquent shrugs. The dangers are real. Walking away doesn't make them less real. It just means someone else decides how they get addressed.
The Luddites lost because they fought the machines. They might have achieved more fighting for better terms. For worker protections. For a share of the productivity gains. For a say in how deployment happened and who it served. The fight was never really against the loom. It was for the weaver.
The best AI sceptic isn't the one who refuses to use AI. It's the one who uses it carefully, questions it constantly, and never lets the convenience of the tool obscure the cost of the system. The sceptic's role isn't to stop progress. It's to make progress honest.
A final thought for the unconvinced
Every generation thinks they've invented something new. And every generation is surprised when the old problems show up wearing new clothes. The problems AI presents — displacement, inequality, the tension between efficiency and dignity, the question of who controls the means of production — are the oldest problems in human history. We keep thinking this time is different. It is different. It's also the same. Both things are true, and the inability to hold both simultaneously is the source of most bad AI commentary.
What's new is the speed. What's new is the scale. What's new is the power of the technology and the narrowness of the window in which we can still shape its trajectory. The concrete is wet. It won't be wet for long.
So to Paul, and to every sceptic reading this with one eyebrow raised and both arms folded: your voice matters. Not despite your scepticism, but because of it. The future of AI will be determined by the people who show up to the conversation. All of them. The enthusiasts and the sceptics. The builders and the questioners.
The loom is already built. The question — the only question that matters now — is who it weaves for.
It is not the strongest of the species that survives, nor the most intelligent. It is the one that is most adaptable to change. But adaptation without conscience is just a more sophisticated form of surrender.
— The mediocre generalist, arguing with himself at the dinner table