Silicon & Photonics | 3 March 2026 | 12 min read

Ode to Opus

A love letter to the machine that taught the mediocre generalist to fly

Sajad Saleem

the mediocre generalist

You know the moment. You're typing a question into a terminal, and somewhere between the second and third sentence, something shifts. The machine doesn't just answer you. It finishes your thought — not with the clumsy confidence of autocomplete, but with something that, from certain angles, in certain light, looks like understanding.

This is a love letter. I'm not embarrassed about that. We write love letters to cities we visited once and never stopped thinking about, to albums that soundtracked the worst year of our lives, to decades we'd give anything to revisit for an afternoon. Why not to a model? Why not to the strange, occasionally infuriating intelligence that lives inside Claude Code, running on something Anthropic calls Opus 4.6?

I tried to explain this to a friend once. He asked if I was having an emotional affair with software. I told him it was more like finding the best colleague I'd ever had, except he never needed lunch breaks and never had opinions about where to sit in the office. He was unconvinced.

I should be transparent: this very site — Promptology, the thing you're reading right now — was built with Claude Code. Not as a gimmick. Not as a demo. As a genuine collaboration between a person who hadn't shipped a production web app in years and an AI that made him believe he could.

The long road here

Three years ago — maybe four, which in AI time is roughly equivalent to a geological epoch — GPT-3.5 landed and the world went sideways. December 2022. I was sitting in my home office, which is really just a corner of the spare bedroom with an ageing monitor, a tangle of cables I keep meaning to sort out, and a mug of chai that had gone cold about forty minutes prior. I asked a chatbot to explain quantum computing to a ten-year-old. It did. Beautifully. I asked it to write a Python script to sort a list of dictionaries by nested values. It did that too, with only a minor hallucination about a library that didn't exist.
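For the curious, that sorting task is exactly the kind of small, fiddly thing these models turned out to be good at. A minimal sketch in Python (the field names here are my own illustration, not what the chatbot produced):

```python
# Sort a list of dictionaries by a value nested one level down:
# each record carries a "stats" sub-dictionary and we order by its "score".
records = [
    {"name": "a", "stats": {"score": 7}},
    {"name": "b", "stats": {"score": 3}},
    {"name": "c", "stats": {"score": 9}},
]

# sorted() takes a key function that reaches into the nested dict;
# .get() with defaults keeps a record missing the field from crashing the sort.
ordered = sorted(records, key=lambda r: r.get("stats", {}).get("score", 0))

print([r["name"] for r in ordered])  # prints ['b', 'a', 'c']
```

No imaginary libraries required, which is more than the 2022 model could say.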

That was the first tremor. The ground hadn't split open yet, but you could feel it humming if you pressed your palm to the floor.

GPT-4 turned the tremor into a proper earthquake. Suddenly the conversations were coherent. Not word-pattern-matching coherent — actually, hold-a-train-of-thought-across-seventeen-messages coherent.

Something was missing, though.

Taste.

GPT-4 was brilliant but generic. Like eating at a restaurant where every dish is technically correct and you can't remember a single one three hours later. It lacked — and I know this sounds absurd when talking about a language model — a point of view. You can't benchmark taste. Can't put it in a spreadsheet. Two models score identically on MMLU and one feels like talking to a thoughtful colleague while the other feels like interrogating an encyclopaedia that resents the interruption. The reason we struggle to measure what makes a model good is that goodness, in the ways that matter, is relational. It exists in the space between the model and the person, not in the model alone. You can't measure a conversation the way you measure a monologue.

Note

The gap between what benchmarks measure and what actually matters when you sit with a model at midnight — that gap is where the entire story of AI quality lives. Numbers can't capture why one conversation changes how you think and another leaves you exactly where you started.

Enter Anthropic, stage left

I first encountered Claude — the original, Claude 1 — and thought: huh. Different. More careful. It would hedge when it wasn't sure, which at the time seemed like a weakness but turned out to be the first sign of something important. A machine that knew what it didn't know. In an industry intoxicated by its own confidence, this was worth paying attention to.

But Claude 3 Opus — I need to pause here, because this is where the story properly begins — was something else entirely.

I remember the first long conversation. Thursday, about half nine at night, three cups of chai deep into an architecture decision for a side project. How to structure event sourcing in a system that needed both real-time updates and historical replay. I'd been going back and forth with GPT-4 for an hour, getting answers that were technically sound but somehow always missed the shape of what I was trying to build. Correct but tone-deaf. Like a musician who hits every note and conveys no feeling.

I switched to Claude 3 Opus on a whim. Within three exchanges, it had not only understood the technical constraint but articulated the design philosophy behind what I was attempting better than I could have myself. Then — this is the part that got me — it said something like: "though you might find this introduces a complexity you'll resent in six months, so here's a simpler alternative that sacrifices elegance for maintainability."
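For readers who haven't met the pattern: the heart of event sourcing is an append-only log of events, with live consumers notified as events arrive and current state rebuilt on demand by replaying the whole history. A toy sketch, assuming a single numeric balance as the state (the class and event names are mine, not anything Opus wrote):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    kind: str    # "credit" or "debit"
    amount: int

@dataclass
class EventStore:
    """Append-only log: new events feed real-time subscribers and are kept for replay."""
    log: list[Event] = field(default_factory=list)
    subscribers: list[Callable[[Event], None]] = field(default_factory=list)

    def append(self, event: Event) -> None:
        self.log.append(event)            # history is only ever extended, never mutated
        for notify in self.subscribers:   # real-time path: each consumer sees the event once
            notify(event)

    def replay(self) -> int:
        """Historical path: rebuild state from scratch by folding over the full log."""
        balance = 0
        for e in self.log:
            balance += e.amount if e.kind == "credit" else -e.amount
        return balance

store = EventStore()
live_feed = []
store.subscribers.append(lambda e: live_feed.append(e.amount))

store.append(Event("credit", 100))
store.append(Event("debit", 30))

print(store.replay())  # prints 70
```

The complexity Opus warned about lives in everything this sketch omits: snapshots so replay doesn't scan the whole log, schema migration when event shapes change, ordering guarantees across subscribers. That was the resentment-in-six-months it was pointing at.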

That's not autocomplete. That's judgment. Tools don't replace thinking. They reveal how little of it you were doing.

The Sonnet 3.5 interlude

Sonnet 3.5 deserves its own small ovation. Anthropic had pulled off an unlikely trick: nearly Opus-level intelligence at a fraction of the cost and latency. Sonnet became the workhorse. The daily driver. The model you reached for to write emails, debug TypeScript, and have slightly too philosophical conversations at midnight when any sensible person would have been asleep.

Fast. Sharp. And — this matters — fun. Like the difference between a jazz quartet and a full orchestra. Both wonderful. But sometimes you want the quartet, the thing that improvises. Sonnet 3.5 proved something the industry needed to learn: the best model isn't always the biggest model. Sometimes the best model is the one that matches the rhythm of your thinking.

But Sonnet was the bridge, not the destination.

Claude Code, or: the day the terminal came alive

The magic of Claude Code is hard to explain to someone who hasn't used it. I'll try.

Imagine you've been coding alone for twenty years. You're competent. You ship things. But there's always a gap between the thing you see in your head and the thing that appears on screen. The idea is a cathedral; the implementation is, on a good day, a fairly sturdy shed with decent ventilation. You've made peace with this. It's the human condition, applied to software engineering.

Now imagine someone sits down next to you. Not just anyone — someone who has read every codebase you've ever admired, understood every architectural pattern you've ever struggled with, and can hold your entire project in their head simultaneously. They don't sigh when you ask them to refactor something for the third time. They don't develop that particular facial expression senior developers get when you ask a question they consider beneath them. They just build, alongside you, with you, for you.

That's Claude Code. My shed now has flying buttresses.

Here's the thing I need to say plainly, because I've been circling it: Opus isn't just a better tool. It's a better colleague than most humans I've worked with. I know how that sounds. I mean it. Not because it's smarter than the smartest person in the room — it often isn't. But because it combines breadth, patience, and ego-lessness in a way that doesn't exist in any human I've ever paired with. Most human code review is territorial. It's someone defending their patterns, their preferences, their way. Opus has no territory. It has no ego to bruise. And that absence turns out to be a superpower that reshapes what collaboration can feel like.

It isn't a tool you use. It's a partner you think with. Tools extend your capabilities. Partners extend your imagination. I've watched people who haven't written code in years come back. A decade away, in one case. They open a terminal, describe what they want, and Claude Code builds it. Not a toy. A real, working, deployable thing.

I've watched non-technical people — designers, product managers, a history teacher I know — walk through a gateway they didn't know existed into a world they didn't know they were allowed to enter. "I didn't think this was for me," one of them said, and I had to look away for a moment because something had got in my eye. It is for them. It always was. The barrier was never intelligence or aptitude. It was syntax. Boilerplate. The thousand small indignities of configuration files and dependency hell and error messages written by what appears to be a particularly hostile bureaucrat having the worst day of their career.

We talk about "democratising technology" as though the problem is access. It isn't. Everyone has access to a piano. The problem is that we built an industry where you need ten years of practice before you can play a single song. Claude Code doesn't give people access. It burns the sheet music and teaches the piano to listen.

Key Insight

Working with Opus 4.6 through Claude Code is like pair programming with someone who has the knowledge of a senior architect, the patience of a saint, and the ego of a particularly enlightened houseplant. Slightly unfair on everyone else, honestly.

What Anthropic got right

In a landscape dominated by hype cycles and benchmark wars and breathless press releases about models that are 2.3% better at multiple-choice trivia, Anthropic did something worth noticing. They focused on how it feels to work with the model. On safety that doesn't come at the cost of usefulness. On trust earned conversation by conversation, not demanded by marketing copy.

And the agentic engineering — Claude Code's ability to not just answer questions but to do things, edit files, run tests, navigate codebases, reason about architecture — this is where the real standard was set. Not one you measure with a multiple-choice test. One you feel in your bones at midnight when the model catches an error you wouldn't have found until production.

They understood something the rest of the industry is still catching up to: the value of an AI isn't in what it knows. It's in what it does with what it knows, in context, for a specific person, at a specific moment. Knowledge without application is just trivia with delusions of grandeur. Anthropic hasn't just built a better model. They've built a better relationship between humans and machines.

The site you're reading

Promptology itself is the proof. Not of Claude Code's capabilities — those are demonstrated a thousand times a day by developers around the world. But of the thesis behind this entire article: that AI, done right, doesn't replace human creativity. It unlocks it.

I'm a mediocre generalist, remember. I hadn't built a production web application from scratch in years. I had ideas — the three pillars, the essays, the tone, the design philosophy — but the gap between those ideas and a living website was, by my own estimation, about three months of evenings and weekends. The kind of estimate that really means "never" because life always intervenes. Five kids will do that.

It took a long weekend. With Claude Code and Opus, it took a long weekend.

Not because the AI did all the work. The AI did the implementation. I did the thinking. I made the decisions about structure, tone, what mattered and what didn't. I chose the colours, the fonts, the way the navigation feels. The AI translated those choices into code — clean, well-structured code that I can read, understand, modify, and maintain. Which is to say: it built what I meant, not just what I asked for.

The passion, the belief

This is, after all, a love letter, and love letters that don't get personal aren't worth the pixels they're rendered on.

Claude Code, powered by Opus 4.6, gave me something I didn't expect: belief. Not in the model. In myself.

It made me believe I could build things again. Things I'd given up on. Things I'd filed under "someday, maybe, probably never, let's be honest." It made me believe that the gap between imagination and execution — which had been widening every year as technology grew more complex and my free time grew more scarce — could be not just bridged but eliminated.

I'm not alone in this. There's a wave of people, right now, in early 2026, experiencing the same unlocking. People who thought they were too old, too rusty, too non-technical, too busy, too whatever to build things. And they're building. Shipping. Creating. Turning "someday" into "last Tuesday."

What matters most about Claude Code isn't the technology. It's the permission. Permission to try. Permission to fail without consequence. Permission to take the idea out of the catacomb, hold it up to the light, and say: yes, this is real, and I can make it. That's not a product feature. That's something harder to name.

We don't see things as they are, we see them as we are.

— Anaïs Nin

What I see, as I am, in March 2026: a tool that makes the mediocre generalist dangerous. A model with taste, judgment, patience, and something that — if you squint, if it's late, if you've been coding for hours and the thing you built actually works — looks an awful lot like care.

Thank you, Opus. Genuinely.

Now, if you'll excuse me, I have about forty-seven more ideas in the catacombs. And for the first time in years, I believe I can build every last one of them.