Coding with Ghosts
What happens when your IDE thinks faster than you do
Sajad Saleem
the mediocre generalist
I wrote my first line of code when I was about thirteen. A FOR loop in BASIC on a machine that weighed more than I did and had less computing power than the smartwatch I now forget to charge. It printed my name fifty times on a flickering green screen. I was convinced I'd discovered sorcery.
That feeling — the specific, electric thrill of telling a machine what to do and watching it obey — has never fully left me. Thirty-odd years later, the machines are vastly more powerful, the languages far more elegant, and the green screen has been replaced by a high-resolution display running a colour scheme called "One Dark Pro" because apparently even our text editors need aesthetics now. But something has changed. Something I'm still not sure whether to call progress or loss — or whether the honest answer is both, sitting uncomfortably in the same chair.
The ghost arrives
If you've used Cursor, Claude Code, Windsurf, Copilot, or any of the current generation of AI coding tools, you know the feeling. You're typing and the machine is completing your thought. Not autocomplete in the old sense. Not snippets. It's anticipating your architecture. Suggesting the function you were about to write, structured better than you would have written it, with edge cases handled that you hadn't thought of yet.
It's like pair programming with a ghost. A very competent, slightly unsettling ghost that has somehow read every Stack Overflow answer ever written, every GitHub repository ever committed, every technical blog post and API doc and computer science textbook — and synthesised all of it into something that looks like intuition. Except it has no body, no coffee preference, and no opinions about whether tabs are better than spaces. Which, honestly, makes it a better pair programmer than most humans I've worked with.
I call them ghosts because they have no permanence, no identity in any meaningful sense. They appear when you invoke them. They speak in your language, in your codebase, in the context of your specific problem. Then they vanish, leaving behind only the code — which may or may not be good, may or may not be correct, but is always, always fast.
The first time an AI suggested a function I hadn't written yet — one that solved a problem I'd only just started thinking about — I had to put my coffee down and stare at the screen for a full thirty seconds. It was a Wednesday, about half nine in the morning, the kind of grey Midlands day where the sky looks like someone washed it at the wrong temperature. The AI had just written, unprompted, a debounce handler with exactly the timeout value I would have chosen. It wasn't the code that shook me. It was the taste. The machine had taste.
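For anyone who hasn't met one: a debounce handler collapses a burst of rapid calls into a single delayed one, so expensive work only runs once the input has gone quiet. A minimal sketch in Python — the 100ms wait here is a placeholder for illustration, not the value from that Wednesday morning:

```python
import threading
import time

def debounce(wait_seconds):
    """Delay calls to fn until wait_seconds pass with no new call;
    only the last call in a rapid burst actually runs."""
    def decorator(fn):
        timer = None
        lock = threading.Lock()
        def debounced(*args, **kwargs):
            nonlocal timer
            with lock:
                if timer is not None:
                    timer.cancel()  # a newer call supersedes the pending one
                timer = threading.Timer(wait_seconds, fn, args, kwargs)
                timer.start()
        return debounced
    return decorator

calls = []

@debounce(0.1)  # 100 ms: a stand-in value, not the one from the anecdote
def on_keystroke(ch):
    calls.append(ch)

for ch in "ghost":   # five calls in quick succession...
    on_keystroke(ch)

time.sleep(0.3)      # ...but only the final one lands
print(calls)
```

The choice that shook me wasn't in the mechanism — it was in the timeout, the judgment call about how long "quiet" should be.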
That word — taste — is doing heavy lifting here. I don't mean aesthetic preference the way a designer has taste, or a chef, or a poet. I mean something more specific: the AI was making choices. Between equally valid implementations, it chose the more elegant one. Between naming conventions, it picked the more readable one. Between architectural patterns, it selected the one that would scale better, even though I hadn't mentioned scalability. It was exercising judgment. The surprise wasn't that a machine could write code — we knew that was coming. It was that a machine could prefer one way of writing it over another.
The green screen and the grey area
Back to that green screen. 1993, maybe 1994. A Commodore, or possibly a BBC Micro. My memory is unreliable on this point, and I've learned to distrust nostalgia — it's the mind's Instagram filter, making everything warmer and more significant than it was. What matters isn't the hardware. It's what the hardware demanded of me.
To make that machine do anything, I had to understand it. Not use it — understand it. BASIC was interpreted line by line. Variables lived in memory locations I could, terrifyingly, access directly. If I messed up a GOTO statement, the program would loop forever and I'd have to pull the plug — literally, physically yank the plug from the wall — because there was no graceful way to exit. The machine didn't forgive. It didn't suggest. It just sat there, cursor blinking, waiting for me to figure it out or give up.
The friction was real. Every program was a fight. Every working output was a victory earned through confusion, frustration, and the kind of deep problem-solving that only happens when you have absolutely no idea what you're doing and no one to ask. My school had one computer teacher, Mr. Patel, who knew less about programming than I did but had the good sense to leave me alone with the machine and a photocopied manual missing its last twelve pages.
And that friction — I know I risk sounding like a nostalgic man shouting at clouds here, but it needs saying — that friction was educational. Not a byproduct of learning to code. The actual mechanism by which learning happened. The understanding was in the struggle, not despite it.
Here's the thing nobody in the industry wants to say: most production code written by humans was never that good. We've romanticised the craft because we're afraid of what it means that a machine can match it. The golden age of hand-written code wasn't golden — it was just all we had. I've maintained enough legacy systems to know this in my bones. The lovingly hand-crafted code was often a mess — inconsistent naming, missing error handling, architectural decisions made at 2am and never revisited. We told ourselves it had character. It had bugs.
That doesn't mean the struggle was worthless. Debugging a segfault at 2am taught me how memory actually works. Wrestling with CSS for three hours taught me how browsers think, and how freely they lie to you about it. Shipping a race condition to production taught me humility. Each failure left a scar, and each scar was knowledge. But the knowledge was in the learning, not in the output. The output was often mediocre. We just had no basis for comparison.
The tools of December 2025
The current landscape is moving fast enough that this section might be outdated by the time you read it. This is not hyperbole. I've considered publishing it with an expiry date, like milk.
Cursor is probably the most popular AI-native code editor. Built on VS Code, rewired from the ground up to treat the AI as a first-class participant in development. You can describe a feature in English and watch it materialise across multiple files. You can ask it to explain code you didn't write, refactor code you wrote badly, or generate tests for code you were too lazy to test yourself — which, if we're being candid about the state of the industry, is most of it.
Claude Code takes a different approach — it lives in the terminal, which appeals to a certain kind of developer. The kind who thinks GUIs are for the weak. You give it a task in natural language and it reads your codebase, plans an approach, writes the code, and runs it. The whole loop. Like having a very competent junior developer who works at inhuman speed and never needs chai or reassurance.
Windsurf (formerly Codeium) sits somewhere in between — a full IDE with AI deeply integrated. There are others: Copilot, Aider, Replit Agent. New ones appear weekly. The market is moving at the speed of panic and ambition, which in tech are often the same chemical compound.
What all these tools share is that they change the texture of coding. The fundamental loop — think, type, run, debug, repeat — has been compressed into something closer to: think, describe, review, adjust. The machine handles the typing. You handle the thinking. In theory. In practice, there's a seductive danger. The loop can quietly become: vaguely gesture at what you want, accept whatever the AI generates, move on. And that works. Surprisingly often, it actually works. Until it doesn't — and you're staring at a production bug in code you don't fully understand, written by an entity that no longer exists, with no memory of why it made the choices it made. It's like inheriting a house from a ghost. Lovely house. No blueprints.
The generalist's moment
I've called myself a generalist for years, and I mean it with affection and accuracy. I know a bit of Python, a bit of TypeScript, some SQL, a smattering of Go. I can hold a reasonably informed conversation about databases, UI design, deployment pipelines, API architecture, and business strategy — but I'm not world-class at any of them. I couldn't write a compiler. My understanding of distributed consensus algorithms is "they exist and they're hard and I'm grateful someone else figured them out."
This used to be a weakness. The specialist always got the job. But the ground has moved.
The AI doesn't need you to be an expert. It's the expert. What it needs from you is something it cannot generate for itself: judgment about what matters. Context. Taste. The ability to look at a technically correct solution and say "yes, but that's not what the user actually needs." The AI can checkmate. It needs you to decide which game is worth playing.
And the best prompt isn't written by the best programmer. It's written by the person who understands the domain, the user, the constraints, and the aspiration — someone who can see across disciplines. The person nobody wanted to hire is suddenly the person everybody needs.
I've built things in the last six months that would have taken me years without AI assistance. A full-stack application with authentication, a database layer, real-time features, and deployment configuration — built in a weekend. Not because I suddenly became a better programmer. Because the barrier between knowing what you want and having it exist has collapsed. The bottleneck is no longer skill. It's clarity of thought.
I'm not romanticising the generalist at the expense of the specialist. Specialists remain essential — someone has to train the models, optimise the infrastructure, push the boundaries of what's possible. What's changed is that the generalist finally has a role that plays to their strengths: specification over implementation, synthesis over depth, knowing where to drive rather than how the engine works.
The flow state, altered
Programmers talk about flow state with an almost religious reverence. That zone where code pours out of you, where the problem and the solution exist simultaneously in your mind, where hours pass like minutes. It's addictive. It's the reason most of us started coding in the first place.
Coding with AI changes the flow state. It doesn't destroy it — I want to be precise about this — but it reshapes it. The old flow was about implementation: your fingers knew the syntax, your mind held the algorithm, and the two danced together in a way that felt almost physical. Now the flow is about direction: you hold the vision, the AI handles the execution, and the dance is faster but different. More like conducting than playing. More like directing than acting.
I miss the old flow sometimes. There. I've said it. I miss the intimate, tactile relationship with the code. The feeling of knowing every line because I wrote every line, character by character, semicolon by agonising semicolon. There was a particular satisfaction in it — the same satisfaction, I imagine, that a carpenter feels in a hand-planed surface versus a machine-sanded one. The result is similar. The experience is not.
AI-assisted flow is more productive — significantly more productive — but it's more distant. I'm one step removed from the material. Working with the code through an intermediary. The intermediary is excellent, but it's still an intermediary. There's a word for what's been lost. I think the word is intimacy.
The question isn't whether AI will replace programmers. It's whether programmers were ever doing what they thought they were doing. Were we craftspeople, or were we translators — converting human intent into machine instruction through an absurdly labour-intensive process? If we were translators, then a better translation tool isn't a threat. It's a liberation. If we were craftspeople, something irreplaceable is being lost. I suspect we were both, and the pain of this moment is in being forced to decide which one mattered more.
The uncomfortable question
What happens to the craft?
Not to software — software will be fine, probably better than fine. Not to the industry — the industry will adapt, as it always does, with the usual mixture of panic, opportunity, and shameless reinvention. I mean the craft. The deep, personal, hard-won understanding of how software actually works. The knowledge that lives not in documentation but in scar tissue.
If a new developer in 2026 starts by describing what they want and having an AI build it — and the thing works, and the tests pass, and the users are happy — have they learned to code? Or have they learned to describe code? Is there a difference that matters?
I think there is. But I might be wrong. The uncertainty is the point.
Consider: a new developer who uses AI to build their first ten projects will have shipped more software in their first year than I shipped in my first five. They'll have seen more architectural patterns, more edge cases, more solutions than I encountered in a decade of doing it the hard way. Maybe that exposure is understanding, arrived at through a different door.
Or maybe it's the difference between a pilot who's flown a thousand hours in a simulator and one who's flown a thousand hours in actual weather. The simulator pilot has more data. The real pilot has more judgment. In normal conditions, you can't tell them apart. In a crisis, you can.
We're training a generation of builders who may never touch the raw materials. That could be liberation. It could be a very elegant form of fragility. We won't know which until something breaks badly enough to test it. By then it'll be too late to go back to the green screen.
The taste question
I keep coming back to taste. Because taste is supposed to be the human moat — the thing machines can't do, the ineffable quality separating great software from merely functional software.
And the machines are developing it.
Not taste in the philosophical sense — they don't prefer elegance. But they produce it. Consistently. The code they suggest isn't just correct; it's often better-structured than what I would have written. Better variable names. More thorough error handling. More scalable architecture. Not always. But often enough that I've started wondering whether my code was ever as good as I thought it was, or whether I just never had a basis for comparison.
Maybe taste was never as ineffable as we believed. Maybe it was always pattern recognition operating at a level of complexity we hadn't bothered to analyse. Maybe the "eye" of a great designer is really just the accumulated weight of having seen ten thousand designs and developed statistical intuitions about which patterns work. If so, then of course a machine trained on billions of examples would develop something functionally equivalent. That's either humbling or unsettling, depending on your relationship with your own creativity.
Whether AI has taste is almost beside the point. The real question is whether taste without consciousness means anything — and whether, practically speaking, the distinction matters if the code ships and the users are happy and the architecture holds. Philosophy is a luxury. Shipping is a deadline.
The path forward
The answer isn't to reject the tools. That's nostalgia cosplaying as wisdom — the equivalent of insisting on a horse because you don't trust the engine.
But the answer also isn't to surrender to them. To become a pure prompt-writer, a director who never acts, a conductor who's never held an instrument. Because the ghosts, competent as they are, still need someone who understands the music. Someone who can hear when the AI's suggestion is technically correct but aesthetically wrong. Someone who knows what "good" sounds like because they've spent years making things that were bad and slowly, painfully learning the difference. You can't conduct an orchestra if you've never held a violin. You can hold it badly — I certainly did — but you have to have held it.
Learn the foundations. Write code by hand sometimes, the way a chef who uses a food processor should still know how to use a knife. Understand what the AI is doing, even when you let it do it. Own the semantics while delegating the syntax. Take responsibility for the why even when the machine handles the how.
And talk about this. Openly. Without the false confidence of the evangelists or the performative modesty of the sceptics. We're in uncharted territory, and the only way to navigate it is to think out loud, together, with the humility to admit that none of us really knows where this goes.
A ghost story, in closing
Last Tuesday — it was about quarter past eleven, and I'd just made my third cup of chai, the one that signals you've abandoned all pretence of going to bed at a reasonable hour — I was working on a particularly tricky piece of code. A real-time data pipeline with some gnarly concurrency issues. The AI had been helping all evening: suggesting approaches, writing implementations, catching bugs. We were in a rhythm, the ghost and I.
At one point, I paused. Looked at the screen. Took a sip of tea that had gone cold without my noticing. And I realised something: I couldn't tell which parts of the code were mine and which were the machine's. Not because it all looked the same — it didn't, exactly — but because the thinking had been collaborative. The boundaries had blurred. My ideas had shaped its suggestions, and its suggestions had shaped my ideas, in a feedback loop so tight that authorship had become a meaningless concept.
Is that wonderful or disturbing?
I don't know. But I know it's real. And I think it's the future. The programmers who thrive won't be the ones who reject the ghosts or worship them, but the ones who learn to work alongside them — with open eyes, clear intention, and enough foundational understanding to know when the ghost is helping and when it's haunting.
The illiterate of the 21st century won't be those who cannot read and write. It'll be those who cannot learn, unlearn, and relearn.
— Alvin Toffler
The green screen is gone. The ghosts are here. And somewhere between the nostalgia and the acceleration, the code is still the code. It just has more authors than it used to.