Teaching My Children About Tomorrow
Dinner table conversations about AI, fear, hope, and what to learn when knowledge is free
Sajad Saleem
the mediocre generalist
My eight-year-old asked me last Tuesday, over fish fingers and peas — the peas largely untouched, arranged in a small defensive formation at the edge of her plate — whether robots would be her friend.
Not whether they'd take her job. Not whether they'd be dangerous. Not any of the anxious, adult questions that dominate the op-ed pages and the conference circuits and the late-night doomscrolling sessions. Just: would they be her friend?
I put my fork down. One of those moments where you realise the question deserves more than a throwaway answer.
"Maybe," I said. "But not the same kind of friend as a person."
She thought about this for a moment, twirling a fish finger like a philosopher's baton. "Like an imaginary friend, but real?"
I haven't been able to get that formulation out of my head since. Like an imaginary friend, but real. An eight-year-old's accidental philosophy, clearer than most of the AI ethics papers I've read this year, and considerably shorter.
The dinner table as classroom
We talk about AI at the dinner table in our house. Not every night — sometimes we talk about football, or what happened at school, or whether pineapple belongs on pizza. It does not. This is the one topic on which I accept no dissent, entertain no debate, and brook no appeal. Some questions are settled. But AI comes up regularly, naturally, because it's in the air now the way the internet was when I was growing up, and pretending it isn't there would be stranger than talking about it.
I have children aged eight to eighteen. That range — a full decade of cognitive and emotional development — means I'm essentially having five different conversations about the same subject, simultaneously, at the same table, while also trying to make sure nobody spills their water. The youngest wants to know if robots have feelings. The middle ones want to know how AI "thinks" and whether it's genuinely thinking or just doing a very good impression. One of them wants to know if her GCSE revision is pointless now that machines can answer any exam question better than she ever will. And the eldest, standing at the edge of university and adulthood, wants to know — needs to know, with a quiet urgency that tightens something in my chest — whether the career he's planning for will still exist by the time he qualifies.
Each of these is a fair question. Each deserves an honest answer. And the honest answer, in every case, begins with the same three words: I don't know.
Three words that are surprisingly difficult to say when your children are looking at you and you're supposed to be the one with answers. Parenthood is a long exercise in discovering how few answers you actually have.
The eight-year-old: innocence meeting technology
My youngest has grown up in a world where voice assistants are furniture. She's never known a house without Alexa. She talks to it the way I talked to the family dog growing up: casually, fondly, with no expectation of deep conversation but with a real sort of companionship. She once thanked Alexa for playing a song, unprompted, which either means I'm raising her right or she's going to be deeply confused when the machines don't thank her back.
When she sees a humanoid robot in a video — Figure, or Optimus, or one of the Unitree demos that circulate on social media — she doesn't have the same reaction I do. I see engineering. I see implications. I see the labour market, the ethics, the geopolitics, the whole tangled web of adult anxieties.
She sees a character. She wants to know its name. She wants to know if it gets tired. She wants to know what it eats. Nothing, I tell her. It runs on electricity. "Like my tablet?" Yes, exactly like your tablet. She nods, satisfied. The universe is orderly. Can she have pudding now?
My eight-year-old doesn't worry about AI because she hasn't yet learned that she's supposed to. I'm not sure she's wrong.
My job, as her father, isn't to make her afraid of robots. It's to make sure she's grounded in the things that make her human — so that when the robots are everywhere, and they will be in her lifetime, she navigates that world with her humanity intact. She doesn't need to understand AI. She needs to understand people. Empathy, kindness, fairness, the ability to read a room and sense when someone is sad. These are the skills that will matter in a world shared with machines. The machines won't have them. The world will still need them.
We draw robots together on rainy Saturday afternoons. She gives them names and backstories and favourite colours. I don't correct the anthropomorphism. She'll develop a more sophisticated understanding as she grows. For now, the act of imagining — of being creative, playful, generous in her interpretation of the world — is the education itself. You can't teach creativity. You can only fail to destroy it.
The twelve-year-old: the how-does-it-work phase
My twelve-year-old son is in the phase where everything needs a mechanism. How does a car engine work? How does Wi-Fi work? How does the brain work? Why is the sky blue? No, but why is it blue? He is, in short, exhausting in the best possible way.
So when he asks how AI "thinks," I can't get away with a vague wave toward the concept of intelligence. He wants to know about training data and neural networks and weights and probability distributions. Not in rigorous mathematical detail — he's twelve, not a PhD candidate — but in the kind of concrete, mechanistic terms that let him build a mental model he can poke at.
I explain it roughly like this: imagine you've read every book ever written. Every conversation, every Wikipedia article, every bit of code, every recipe, every love letter, every angry comment on a football forum. And from all that reading, you've developed incredibly good pattern recognition — you can predict, given any sequence of words, what word is likely to come next. Not because you understand the meaning, but because you've seen the patterns so many times that you can reproduce them with high accuracy.
"So it's not really thinking," he says. "It's just... pattern matching?"
"Very sophisticated pattern matching," I say. "So sophisticated that the output is often indistinguishable from thinking. Which raises an interesting question about whether there's actually a difference."
He goes quiet for a bit. Then: "But it doesn't know things. It just... predicts things."
This kid is going to be fine.
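(For readers who want the nerdier version of that kitchen-table explanation, here's a toy sketch in Python. It's nothing like a real language model — no neural network, no billions of parameters, just a bigram counter over a made-up corpus — but it captures the intuition I was reaching for: guess the next word from the patterns you've already seen. The corpus and the function name are purely illustrative.)

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- purely illustrative, a few words long.
corpus = (
    "the cat sat on the mat . the cat chased the dog . "
    "the cat slept on the rug ."
).split()

# For every word, count which words have followed it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Guess the next word: the follower seen most often in the corpus."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (it follows 'the' most often here)
print(predict_next("on"))   # -> 'the'
```

No understanding anywhere, just counting and predicting — which is, in spirit if not in scale, the point my son landed on by himself.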
His challenge is different from the eight-year-old's. It's not about accepting technology — he's already there, building Minecraft mods with ChatGPT and treating AI as casually as I treated a calculator. His challenge is developing critical thinking about technology. The ability to use AI tools effectively while maintaining a clear-eyed understanding of what they are and aren't. Awe and scepticism can coexist. In fact, they must.
What worries me about his generation is something I think of as "AI credulity" — a tendency to trust machine output because it's fluent and confident, without applying the same scepticism they'd apply to a human making identical claims. The most dangerous thing about AI isn't that it's wrong — it's that it's wrong with perfect grammar and total confidence. Teaching children to question fluent nonsense is now a survival skill, and it's not on any curriculum I've seen.
The fifteen-year-old: anxiety about the future
My fifteen-year-old came home from school last month and announced, with the weary fatalism that only a fifteen-year-old can properly execute: "There's literally no point in revising for my GCSEs. AI can do all of this better than I ever will."
She's not entirely wrong. A language model can answer most GCSE-level questions across most subjects with near-perfect accuracy. The factual, examinable content of a secondary school education is, in a real sense, commoditised. My fifteen-year-old noticed. The Department for Education, as far as I can tell, has not.
But she's missing something crucial, and it's the thing I most want her to understand.
The point of education was never just the facts. It was the development of a mind. The process of wrestling with a difficult concept until it clicks — that click, that moment of understanding, rewires something in your brain that no amount of reading someone else's answer can replicate. The slow, frustrating, irreplaceable work of building the cognitive infrastructure that allows you to think — not retrieve information, but actually think, critically and creatively, with the kind of depth that comes only from struggle. Knowledge is free now. Wisdom never will be.
What can never be free is the capacity to use knowledge wisely. Judgement. Taste. The ability to distinguish between what's true and what merely sounds true. Between what matters and what merely seems urgent. These are human skills, and they are built through the hard, unglamorous, occasionally tedious work of learning. There are no shortcuts. There are only people who haven't discovered that yet.
I told her this, or a version of it, and she rolled her eyes with the practised precision of someone who has been rolling her eyes at parental wisdom for approximately three years and has got very good at it. So I tried a different approach.
"You're right that AI can answer exam questions," I said. "So the exams will have to change. And they will. But the people who'll design the new exams, the new curricula, the new ways of assessing what humans can actually do — they'll need to understand both the technology and the humans. People who can think across boundaries."
She considered this. "So I should revise, but also learn about AI?"
"You should revise, and think about AI, and read widely, and argue with people, and write things that aren't for marks, and stay curious about everything. The revision is the least important part of your education. But it's not worthless. It's practice for your brain. Like scales for a pianist."
"You can't play piano, Dad."
"Which rather proves the point about practice."
She almost smiled. She went upstairs and opened her textbook. I'll take the win.
The eighteen-year-old: standing at the edge
And then there's my eldest. Eighteen. About to go to university. Studying — he hopes — computer science. Which means he's choosing to enter the field that AI is most visibly disrupting, and he knows it, and the awareness sits on him the way weather sits on a hill.
"Dad," he said to me one evening — it was a Thursday, about half nine, and we were washing up together, which is when the real conversations happen in our house, over soapy water and the clink of plates — "will there be software developers in ten years?"
I dried a mug. Put it in the cupboard. Took my time, because the question deserved it.
Honest answer: I think so, but different ones. The role will change. It's already changing. The grunt work — the boilerplate, the routine CRUD applications, the cookie-cutter websites — that's being automated now, today, in real time. What remains is the hard stuff. The architectural thinking. The system design. The ability to understand what a client actually needs, which is rarely what they say they need. The judgement calls about security, scalability, ethics, trade-offs.
I told him something I believe, even though I hold it lightly: the last person standing in any profession will be the one who can do the thing the machine can't. And the thing the machine can't do — not yet, maybe not ever — is care. Care about the user. Care about the consequences. Care about getting it right not because the tests pass but because a real person on the other end is depending on the system working at the worst possible moment.
"Learn the fundamentals," I told him. "Not because the specific languages matter — they'll change, they always do — but because the fundamentals teach you how to think about computation. And then learn everything else you can. Business. Psychology. Design. Ethics. History. The broader you are, the harder you are to replace, because you're not competing with AI on technical execution. You're competing on being a complete human who happens to be able to code. Nobody automates a person. They automate a task."
He nodded. I don't know if he was reassured. I'm not sure I'm reassured. But I've noticed something about my son that gives me hope: he's already adapting. He uses AI tools naturally, confidently, critically. He's building with them, not just consuming from them. Asking the right questions is, as I keep telling my children until they groan, ninety per cent of everything.
What schools aren't teaching
I need to say something about education, and I need to say it bluntly.
Most school curricula, in the UK and elsewhere, are about twenty years behind the world they're preparing children for. This was a problem before AI. It's now a crisis. Slow-motion, politely administered, committee-approved — but a crisis nonetheless.
We're still teaching children to memorise facts in an age when facts are free. Still assessing them primarily through written examinations in an age when writing fluent text is something a machine can do in seconds. Still organising knowledge into rigid subject silos in an age when the most useful thinking happens at the intersections. We have, in essence, built an education system optimised for producing the one thing that AI does better than any human: storing and retrieving information.
Children don't need to be prepared for the future. They need the confidence that they can participate in shaping it. That reframing matters — it shifts the posture from bracing for impact to rolling up sleeves.
What matters now — and will matter more in five years, ten years, twenty:
Taste. The ability to distinguish good from bad, elegant from merely functional. AI can generate a thousand options in the time it takes you to finish your chai. The human's job is to choose the right one. You develop it by spending time with the best things humans have made and letting them calibrate your sense of quality.
Judgement. Making decisions under uncertainty with incomplete information. Built through experience, through being given real problems with real stakes and being trusted to navigate them. You can't learn judgement from a textbook any more than you can learn to swim from a diagram.
Empathy. Understanding what another person is feeling and acting on that understanding with care. In the domains where empathy matters most — healthcare, education, the quiet conversation with a frightened child at two in the morning — the difference between real empathy and simulated empathy is the difference between a hand on your shoulder and a message on a screen. Both acknowledge your pain. Only one of them helps.
Curiosity. The drive to learn not because you have to but because you want to. Curiosity is the only credential that never expires — the skill that makes all other skills renewable.
If I could redesign the curriculum from scratch, I'd build it around those four things. Everything else — the specific knowledge, the technical skills, the domain expertise — can be acquired on demand, by anyone who has the foundational human capabilities to direct the learning. Teach children how to think and they'll figure out what to think about. Teach them what to think and you've already lost. That distinction is the blind spot of most education reform I've seen proposed in the last decade.
The tension
There's a tension in all of this that I haven't resolved, and I'd rather be honest about that than pretend I've got it figured out.
On one hand, I want to protect my children from anxiety. The eight-year-old doesn't need to know about labour market disruption. Even the fifteen-year-old deserves a childhood that isn't dominated by existential dread about the singularity.
On the other hand, I want to be honest with them. Because dishonesty — even well-intentioned dishonesty, even the "don't worry about it, everything will be fine" variety — is its own kind of harm.
I've stopped telling my children that everything will be fine. Not because I'm pessimistic, but because "everything will be fine" is a thing you say to end a conversation, not to help someone. What I tell them instead is: you will have more tools to shape the world than most generations before you. That's not comfort. That's responsibility. And responsibility is better than comfort, because comfort makes you passive and responsibility makes you move.
The future isn't something that happens to you. It's something you participate in building.
The generations that lived through major technological transitions — electrification, the telephone, the internet — adapted. The ones who adapted best weren't the ones who predicted what was coming. They were the ones who were flexible, curious, and stubborn enough to keep learning. The kids growing up right now, inside this change, surrounded by it from birth, have that same chance. Not despite the uncertainty. Because of it.
A Saturday afternoon
Last Saturday. Raining, because it's Britain and it's October and the sun is essentially theoretical at this point. All five kids were home, which is increasingly rare.
The youngest was drawing robots at the kitchen table, tongue poking out in concentration. One of the boys was getting ChatGPT to help him build a Minecraft mod — successfully, it turned out, because the kid is shipping software now, which is both impressive and a pointed commentary on my own career trajectory. One of the girls was supposedly revising but actually watching a YouTube video about how AI is changing medicine, which I'm counting as education because the alternative is an argument I won't win. And the eldest was at his laptop, building a web application — his university portfolio piece — with an AI coding assistant that was, by his account, "like having a senior developer looking over your shoulder, except it doesn't sigh when you make a stupid mistake."
Five children. Five different relationships with AI. All in the same kitchen, on the same rainy Saturday, navigating the same moment with the casual ease of people who've never known anything different.
I stood in the doorway and watched them for a minute.
They're going to be fine. Not because the future is certain. Not because I have the answers. But because they're curious, and kind, and more adaptable than they know. And because they have each other. And because they have a dad who is trying — imperfectly, sometimes clumsily, always with more love than he knows how to express tidily — to prepare them for a world he can't fully imagine.
The best thing I can give my children isn't a prediction about the future. It's the confidence to face it without flinching. The curiosity to explore it without cynicism. The empathy to shape it in ways that include everyone, not just the people who look and think and live like us.
The rain stopped eventually. It always does, even in October, even in Britain, even when it feels like it won't.
The youngest showed me her robot drawing. She'd given it a top hat and a bow tie, because apparently the future is formal.
"Does this robot have a name?" I asked.
"Gerald," she said, without hesitation. "He's a helper robot. He helps people who are sad."
I looked at this kid — eight years old, peas still uneaten, entirely unbothered by the fact that the world is remaking itself around her — and I thought: if the future has people like you in it, it'll be all right.
The best way to predict the future is to invent it.
— Alan Kay
I can't invent the future. But I'm raising five people who might. And on the good days — the rainy Saturdays, the dinner table conversations that go sideways into philosophy, the small moments where a child says something so precisely true that it stops you mid-step — I think that might be enough.
She put down her crayon. "Dad, do you think Gerald would like our house?"
"I think he'd love it," I said.
She smiled. And she went back to drawing.