Steel & Sinew|22 January 2026|14 min read

The Year the Robots Stood Up

2025-2026 and the rise of embodied intelligence

Sajad Saleem

the mediocre generalist

Eighteen months ago, a robot fell over at a trade show in Las Vegas and two engineers rushed to help it up. Last week, a robot caught a falling box on a BMW production line before a human worker had finished flinching. Somewhere between those two moments, something shifted.

The robots stood up.

Not metaphorically. Literally. Humanoid robots — bipedal, dexterous, increasingly autonomous machines built in our image — went from research curiosities and viral YouTube clips to commercial products shipping to actual warehouses. Figure's 02 walked onto a BMW production line and started doing useful work. Tesla's Optimus folded laundry on stage and, for once, it wasn't theatre — units were rolling off a modified assembly line in Fremont by Q4 2025. And Unitree's H1, that eerily fast Chinese humanoid that could do backflips, got a price tag that made procurement departments across three continents reach for their chequebooks.

Colleague. That's the word people have started using for these machines, and it deserves a moment. We went from watching robots on screens to sharing physical space with them. That adjustment happened in roughly the time it takes to pay off a sofa on credit.

The convergence nobody quite prepared for

We spent decades debating whether artificial general intelligence would arrive in 2030 or 2050 or never. Entire careers were built on those predictions. But while everyone was arguing about when a machine would pass some philosophical Turing test, something more practical was happening in robotics labs from Palo Alto to Shenzhen.

Quietly. Steadily. All at once.

A language model that can reason is powerful. A language model that can reason and pick up a screwdriver is a different thing entirely. That distinction — between digital intelligence and physical intelligence — is the most important technological shift of our decade. We crossed the threshold sometime around mid-2025. Nobody rang a bell. Nobody needed to.

What happened was a convergence. Four things matured at roughly the same time, and their intersection created something none of them could have managed alone.

Computer vision reached near-human performance — and in some narrow domains, surpassed it. The jump from 95% accuracy to 99.5% doesn't sound dramatic. In robotics, it's the difference between a machine that drops a cup occasionally and one that doesn't. Between a demo reel and a purchase order.
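A toy calculation makes the point concrete. Assuming each step of a task succeeds independently (a simplification, and the numbers are illustrative), per-step accuracy compounds brutally over a multi-step job:

```python
# Probability a robot completes a multi-step task with zero failures,
# assuming each step succeeds independently (a simplifying assumption).

def task_success(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

# A 20-step pick-and-place sequence:
print(task_success(0.95, 20))   # ~0.358 — fails nearly two times in three
print(task_success(0.995, 20))  # ~0.905 — succeeds nine times in ten
```

That's the whole argument in two lines: 95% per step means a machine that can't finish most jobs; 99.5% means one you can actually buy.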

Transformer architectures proved they could handle multimodal inputs simultaneously. The same architectural insight that made GPT work for language turned out to generalise well to vision, proprioception, force feedback, spatial reasoning. You could feed a robot model a camera stream, joint angle data, tactile sensor readings, and a natural language instruction, and the thing would figure out what to do. Not perfectly. Not every time. But well enough. These things don't start at perfection. They start at well enough.
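The core trick is simpler than it sounds: map every input stream into a common token space, tag each token with its modality, and let one sequence model attend over all of it. Here's a minimal sketch of that idea — the names, shapes, and the placeholder embedding are all my invention, not any real robot stack's API:

```python
# Sketch of multimodal tokenisation: heterogeneous inputs become one
# ordered token sequence for a single transformer policy to attend over.
# Everything here is illustrative, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Token:
    modality: str          # "vision", "proprio", "tactile", or "language"
    features: list[float]  # embedded representation, same width for all

def embed_word(word: str, width: int = 4) -> list[float]:
    # Placeholder embedding (hash-based) just to keep the sketch runnable.
    h = hash(word)
    return [((h >> (8 * i)) % 256) / 255.0 for i in range(width)]

def tokenize_observation(camera, joint_angles, tactile, instruction):
    """Fuse camera patches, joint state, touch, and text into one sequence."""
    tokens = [Token("vision", patch) for patch in camera]
    tokens.append(Token("proprio", joint_angles))
    tokens.append(Token("tactile", tactile))
    tokens += [Token("language", embed_word(w)) for w in instruction.split()]
    return tokens  # the policy attends over all of these jointly

seq = tokenize_observation(
    camera=[[0.1, 0.2, 0.3, 0.4]],        # one image-patch embedding
    joint_angles=[0.0, 0.5, -0.3, 1.1],   # joint-angle state
    tactile=[0.02, 0.0, 0.9, 0.1],        # fingertip pressure readings
    instruction="pick up the cup",
)
print(len(seq))  # 7 tokens: 1 vision + 1 proprio + 1 tactile + 4 words
```

The point of the sketch is the uniformity: once everything is a token, the architecture doesn't care whether a token started life as a pixel, a joint angle, or a word.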

Sim-to-real transfer reached the point where you could train a robot entirely in simulation — millions of episodes of trial and error in a virtual physics engine running at thousands of times real speed — and then deploy the learned policy on physical hardware with minimal fine-tuning. It used to take months to teach a robot arm to pick up a cup. Now you could simulate a billion cup-grasps overnight and have a working policy by morning. The economics of robot learning changed dramatically.
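The "billion grasps overnight" claim is easy to sanity-check with back-of-envelope arithmetic. All the numbers below are assumptions in the spirit of the claim, not measurements from any particular simulator:

```python
# Back-of-envelope throughput for overnight simulated training.
# Every number is an illustrative assumption.

sim_speedup = 100        # simulated seconds per wall-clock second, per env
parallel_envs = 4_096    # environments running concurrently
episode_seconds = 10     # one cup-grasp attempt, in simulated time
night_hours = 8

sim_seconds = night_hours * 3600 * sim_speedup * parallel_envs
episodes = sim_seconds // episode_seconds
print(f"{episodes:,} grasp attempts overnight")  # → about 1.2 billion
```

Even with conservative per-environment speedups, massive parallelism gets you to a billion episodes before breakfast — which is why the economics changed.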

And hardware costs followed the same deflationary curve that made smartphones ubiquitous fifteen years ago. The bill of materials for a humanoid robot dropped below $20,000 in some configurations. Not the selling price — the cost. That's the number that makes venture capitalists spill their oat milk lattes. By early 2026, you can buy a general-purpose humanoid robot for roughly the cost of a mid-range car. That sentence would have been science fiction three years ago. It's a product listing now.

What I saw that changed my mind

I should be transparent: I find this stuff thrilling. I grew up watching Star Wars and reading Asimov, and I am not, in my heart, a neutral observer of robots learning to walk. I'm a forty-something man who still gets a bit giddy about this, which is either endearing or pathetic depending on your generosity.

But the giddiness turned into something else entirely last autumn. I was at a warehouse demo — one of those carefully managed showcases where the company controls everything and the lighting is suspiciously good. A Figure 02 was doing pick-and-place work, moving boxes from a conveyor to a pallet. Routine stuff. And then a box arrived at an odd angle, partially crushed, and the robot paused — maybe three hundred milliseconds, not long enough for a human to notice consciously — and adjusted its grip. It tilted its wrist, redistributed the pressure, placed the box gently rather than firmly. A tiny, improvised act of physical intelligence that nobody had programmed for that specific scenario.

I felt something shift in my chest. Not fear. Recognition. That pause, that micro-adjustment — that's what learning looks like. Not the dramatic Terminator kind. The quiet kind. The kind you do a thousand times a day without noticing. And the robot was doing it.

The uncanny valley isn't about robots looking almost human. It's about humans realising they're not as special as they thought.

What this means for actual people

My youngest is eight years old. She watched a video of Figure 02 working alongside humans in a warehouse and said, completely matter-of-factly: "Oh, so they have robot helpers now." No awe. No fear. No philosophical crisis. Just acceptance, the way you'd accept that Wednesday follows Tuesday.

That total lack of surprise tells you something about what's coming. Her generation won't think of robots as science fiction or novelty. They'll think of them the way my generation thinks of smartphones — always there, mostly helpful, occasionally annoying, impossible to imagine life without.

Children aren't afraid of the robots. They want to name them. They want to know if they sleep. They're asking better questions than most adults, because they haven't yet learned that they're supposed to be afraid.

For the older demographic — my parents' generation, people in their seventies and eighties — robots represent something different entirely. Not novelty. Not threat. Independence.

A robot that can help you get out of bed in the morning. One that notices if you've had a fall and calls for help before you even ask. One that helps you to the bathroom at three in the morning without requiring you to press a button and wait twenty minutes for a stranger in scrubs to arrive. Without requiring you to ask for something so intimate that the asking itself is a small humiliation, repeated daily, eroding something essential.

This isn't luxury tech. This is dignity. This is someone being able to stay in their own home for five, maybe ten more years instead of moving into care. If you don't think that matters, you haven't sat in a hospital corridor with a parent who's just been told they can't go home anymore. I have. It was a Wednesday afternoon in November. The fluorescent lighting made everything look worse than it was, which was already bad enough.

Note

The most useful applications of humanoid robots won't be in factories. They'll be in homes. In care facilities. In the daily, unglamorous, essential work of helping people live with dignity. The headlines will be about warehouses and manufacturing. The real story will be about grandmothers.

The care sector in the UK is already at breaking point — understaffed, underfunded, running on the goodwill of people who are paid barely above minimum wage to do some of the most emotionally demanding work that exists. Humanoid robots won't replace carers. But they could handle the physical labour — the lifting, the fetching, the three-AM monitoring — and free human carers to do what humans are irreplaceably good at: being present, being kind, holding someone's hand when they're frightened. The machine handles the weight. The human handles the meaning.

The responsibility question

Every major technology creates winners and losers. The printing press empowered readers and bankrupted scribes. The automobile liberated travellers and displaced an entire ecosystem of horse-related professions that nobody mourns because we've forgotten they existed. And the washing machine — the technology I keep returning to — didn't just clean clothes. It freed millions of women from hours of daily manual labour and is arguably the most underappreciated catalyst of social change in the twentieth century. Nobody writes odes to the washing machine. Somebody should.

AI and robotics will follow this pattern, but faster and at a larger scale than most previous examples. Warehouse workers, delivery drivers, certain categories of manufacturing labour, cleaning staff, security guards — these jobs will be affected first and hardest. And the people in those jobs are disproportionately the people who can least afford disruption. They don't have savings to retrain on. They don't have the luxury of "pivoting to the knowledge economy," which is what comfortable people in comfortable offices say when they mean I'll be fine; good luck to you.

But here's the contrarian claim I've been circling, the one that keeps me up at night: we're having the wrong debate about robots and jobs. The real scandal isn't that robots might take jobs — it's that we've built an economy where losing your job means losing your dignity. Fix the economy and the robot question answers itself. A society where employment is the sole gateway to housing, healthcare, and self-worth is a society that was fragile long before any robot stood up. The robots didn't create that fragility. They just made it impossible to ignore.

The companies building these robots have a moral obligation — not a PR obligation, an actual moral one — to think seriously about transition. Responsibility scales with capability. The more powerful the technology, the less optional the ethics become.

The McLuhan mirror

There's a Marshall McLuhan line I've been carrying around for months, turning it over like a stone worn smooth:

We shape our tools, and thereafter our tools shape us.

— Marshall McLuhan

He was talking about television. But the insight scales to any technology that becomes ambient — any technology that stops being something you use and becomes something you live with.

Humanoid robots will shape us too. They'll change how we think about labour, about the body, about what it means to be helped, about what it means to be useful. They'll change us in ways we haven't anticipated, because we never do. We adopt the tool thinking we'll remain the same person holding it. We never do.

The open question is whether we shape the tool first, with intention and care, or whether we do what we usually do: move fast, break things, and commission a thoughtful retrospective about the damage once it's too late to undo any of it. The hard part isn't knowing the right answer. It never is. The hard part is choosing it when there's money to be made from choosing wrong.


Standing up

The robots stood up in 2025. Some of them stumbled. Some fell over and had to be helped back to their feet by engineers with laptops and mildly panicked expressions. Some moved with a fluidity that was unsettling — too smooth, too purposeful for a machine. I watched the footage of the new Atlas doing a backflip dismount and landing perfectly, and I didn't quite know what to feel.

My daughter asks me if robots dream. I tell her I don't know. She asks me if they get lonely. I tell her they don't have feelings, not like we do. She considers this for a long moment — longer than you'd expect from an eight-year-old — and says: "That's sad." And I think, not for the first time, that children understand things about this moment that the rest of us are too busy or too invested to see clearly. They see the robot and ask whether it's happy. We see the robot and ask whether it's profitable. Both questions matter. Only one of them is wise.

The robots stood up. Now it's our turn to stand up. For equity in who benefits. For dignity in how the technology is deployed. For honesty about the disruption that's coming, even when honesty is commercially inconvenient.

The technical question has been answered. The robots can stand. They can walk. They can work. What remains is the harder question: will we use them wisely?

The next chapter arrived early, on two legs, and it's standing in a warehouse in Stuttgart, waiting for instructions.

Let's make sure we give it good ones.