Heart & Society | 14 February 2026 | 14 min read

Why Promptology

On naming things, asking questions, and the art of the prompt

Sajad Saleem

the mediocre generalist

The quality of your life is determined by the quality of your questions. This isn't a self-help platitude. It's an observation so old that Socrates built a career on it and every Zen koan depends on it. And now, in an unlikely convergence that would make both traditions deeply uncomfortable, so does every language model sitting behind a blinking cursor, waiting for you to say something worth responding to.

Intelligence — or something that behaves indistinguishably from it — is available on tap. You can summon it with a sentence. The catch is that the sentence matters. The prompt matters. How you ask determines what you receive. That's an old truth wearing very new clothes.

Note

The word "promptology" isn't in any dictionary. I made it up. But the best words are always made up — someone just has to say them first, and mean it.

Promptology is the word I coined for this practice. Not prompt engineering — that sounds like something you'd find sandwiched between "blockchain enthusiast" and "synergy evangelist" on a LinkedIn headline, right next to a headshot taken in 2014. Promptology is the study of how humans and machines think together. It's the space between the question and the answer, the gap where intention meets interpretation. It is something close to a new liberal art.

Here's the claim I'll stake this whole site on: Promptology isn't a skill. It's a worldview. And most people don't have one yet. They have tools. They have access. They have opinions about which chatbot is better. What they don't have is a framework for thinking about what it means to collaborate with non-human intelligence — ethically, practically, philosophically. That's the gap this site exists to fill. Not with answers. With better questions.

The domain that waited

I registered promptology.co.uk in early 2023. It cost me about eight quid. I remember the exact moment — sitting in my home office, late January, the radiator cranked up to a setting that would horrify my energy provider, half-eaten packet of digestives on the desk. I'd just spent three hours talking to ChatGPT about the philosophy of consciousness. Not because I needed to. Because I couldn't stop. I'd told myself "five minutes more" at least three times. That was ninety minutes ago.

ChatGPT had launched a couple of months earlier, and something had shifted. Not in the technology — I'd been following machine learning for years, knew the trajectory, had read the papers (well, skimmed the abstracts and stared at the diagrams, if we're being honest). What shifted was the feeling. For the first time, I was having a conversation with something that wasn't human, and it felt substantive. Not perfect. Not sentient. But surprisingly useful. Like discovering your calculator has opinions.

So I bought the domain. And then I did absolutely nothing with it for three years.

Three years. In AI time, that's roughly four geological epochs. When I bought that domain, GPT-3.5 was the state of the art. Now we have models that reason, plan, write code, compose music, and — this still catches me off guard — surprise me with insights I hadn't considered. The distance between January 2023 and February 2026 is one of the widest three-year leaps in computing I've lived through.

Why the delay? Partly imposter syndrome. Who am I to write about AI? I'm not a researcher, not a founder, don't have a PhD in machine learning or a corner office at DeepMind. I'm a generalist — a self-described mediocre one — who happens to be fascinated by what's happening and conscious of where it's going.

But partly — and this is harder to admit — I was afraid. Afraid the site would be irrelevant before the DNS propagated. Afraid I'd get things wrong. In AI, writing in the present tense feels like writing about the past. By the time you finish the sentence, the sentence is history.

I'm publishing anyway. Imperfect beats silent. Every time.

What promptology actually means

Words matter — especially made-up ones, because they haven't yet been ruined by overuse.

Promptology, as I use it, is the study and practice of human-AI communication. But that's the clinical definition, and it misses the heartbeat. What I really mean is something broader: the discipline of asking good questions in an age where the quality of the answer depends almost entirely on the quality of the ask. The question is the product. The answer is just the receipt.

This isn't new. Socrates built an entire philosophical method around it. Journalists live and die by it. Therapists and teachers and scientists — they all know that the question shapes the answer as much as the answer shapes understanding. What's new is that we now have a universal interlocutor. An entity that will respond to any question, in any domain, at any hour, with varying degrees of brilliance and nonsense, often in the same paragraph.

And the skill isn't learning to use AI. A child can do that. My eight-year-old does it daily, with alarming competence and zero philosophical anxiety. The skill is learning to think with AI. To push back when it's wrong. To recognise when it's subtly right in ways you hadn't considered. To treat it not as an oracle or a servant, but as a collaborator with very specific strengths and very specific blindnesses.

Einstein once wrote that the formulation of a problem is often more essential than its solution. We now have near-unlimited access to solutions. What we're short of is people who can formulate the right problems. Everyone talks about AI replacing jobs. Nobody talks about AI replacing excuses. Tools don't replace thinking. They reveal how little of it you were doing.

Promptology isn't about writing clever prompts. It's about thinking clearly enough that your prompts become almost unnecessary — because you've already done the hard cognitive work of understanding what you actually need. The prompt is the last mile. The thinking is the journey.

The three pillars

When I finally sat down to structure this site, I kept arriving at a simple observation: AI is reshaping three domains simultaneously, and most commentary only covers one.

Silicon & Photonics — the digital realm. Language models, coding agents, reasoning systems. The invisible intelligence reshaping how we think, write, build, and create.

Steel & Sinew — the physical realm. Robots that walk, hands that grasp, machines that see and navigate and manipulate. AI leaving the screen and entering the room. The code is growing a body — I'm still not sure whether that sentence excites me or unsettles me. Depends on the day.

Heart & Society — the human realm. Education, accessibility, equity, creativity, employment, dignity. The questions that don't have technical answers because they're not technical questions. What should AI do? Who benefits? Who's left behind?

These three pillars aren't categories so much as lenses. Every AI development worth discussing touches all three. A humanoid robot (steel) running a large language model (silicon) that helps an elderly person stay independent in their home (heart) — that's the whole picture. Most AI commentary lives entirely in the silicon pillar. Steel gets covered as spectacle. Heart gets covered as anxiety. Promptology tries to hold all three simultaneously, because reality doesn't come in neat categories, no matter how tidy the website navigation.

AI for good — a filter, not a slogan

I'm aware the subtitle of this site — "AI for Good" — sounds like a conference title. The kind of phrase printed on lanyards, projected onto screens in rooms full of people who nod approvingly and then go back to optimising engagement metrics. I've been to those conferences. The coffee is always terrible and the conviction is always thin.

That's not what I mean by it.

Most AI ethics discourse is theatre performed by people who've never built anything. The real ethical work happens in the building — in the thousand small decisions about who the thing serves. "AI for Good" as I use it here is a filter. A question I ask before writing about anything: does this matter for actual humans? Not for shareholder value. Not for benchmark scores. For people. For families. For the quiet, unglamorous business of living a decent life. Technology doesn't care about humans. Humans have to care about humans. That's the entire job.

I have kids aged 8 to 18. This isn't abstract for me. It's a Tuesday evening conversation sandwiched between homework complaints and arguments about screen time.

"For good" as a filter means I'm interested in AI that helps a dyslexic child read. AI that gives a rural clinic in sub-Saharan Africa access to diagnostic expertise it could never otherwise afford. AI that lets my parents' generation age with dignity. AI that makes the mundane effortless so humans can spend more time on the meaningful.

It also means I'm interested in the failures. The biases baked into training data. The surveillance creep. The concentration of power. The hollowing out of creative professions. Optimism without accountability is just marketing.

The mediocre generalist

I should explain the tagline. "The mediocre generalist." It's self-deprecating, obviously, but it's also a philosophical position.

I can write passable code in four languages — passable being the operative word, the kind of code that works but makes senior developers wince slightly, like watching someone hold a pen with their whole fist. I understand, roughly, how transformer architectures work. I tried to explain them to a family friend once. He asked if it was about the robots from the films. In fairness, he wasn't entirely wrong.

I've read enough philosophy to be dangerous at dinner parties. I can hold conversations about education policy, robotics, business strategy, UI design, and the epistemology of large language models — but I am demonstrably not world-class at any of them. My superpower is being mediocre across an unusually wide front.

For most of my career, this was a liability. The world rewarded specialists. Depth was the currency.

But that equation has changed. In a world where AI can provide depth on demand — where you can summon domain expertise with a well-crafted prompt — the value equation looks different. The scarce resource isn't knowledge. It's synthesis. The ability to connect dots across domains. To see patterns that specialists miss because they're too deep in their own trench to notice the sky.

We don't fear AI because it's alien. We fear it because it's familiar — it thinks the way we wish we thought, and that's uncomfortable. It's patient where we're impulsive, thorough where we're lazy, and relentlessly available where we need sleep. The generalist doesn't compete with AI on any of those fronts. The generalist competes on breadth of curiosity — on the ability to ask: "Wait, how does this thing in biology connect to that thing in economics, and what does it mean for education?" The AI can answer that question brilliantly. But only if someone thinks to ask it.

In the age of AI, the most valuable human skill isn't expertise. It's the ability to wander between disciplines, make unexpected connections, and ask questions no specialist would think to ask — because they're too busy being expert to be curious. Not depth. Not credentials. Curiosity, breadth, and the willingness to be wrong in public. Which, come to think of it, is a reasonable description of blogging itself.

What you'll find here

Promptology will publish essays, not articles. The difference matters. Articles inform. Essays explore. I don't have answers to most of the questions I'm asking. What I have is a conviction that the questions matter, and that thinking out loud — in public, imperfectly, with the courage to change my mind — is more valuable than waiting until I'm certain. Certainty is for people who haven't been paying attention.

You'll find pieces about coding with AI and what it does to the craft. Humanoid robots and what it means for the physical world to become programmable. Education and whether our schools are preparing children for a world that no longer exists. Creativity and whether AI enhances it or hollows it out. The ethics of synthetic intelligence, and the deeply practical question of whether your job will exist in five years.

Some of these pieces will be wrong. Technology moves faster than opinion, and I'd rather be honestly wrong than safely silent. I'll correct myself when the evidence demands it. That's not weakness. That's the whole point.


A final thought

Three years, this domain sat there. It cost eight quid. The writing cost considerably more — in time, in vulnerability, in the willingness to say "I don't know" in a world that rewards false certainty. Eight quid for the address. Everything else for the furniture.

But here we are. Promptology is live. The mediocre generalist has a platform. And the questions are already better than the answers.

The best time to plant a tree was twenty years ago. The second best time is now.

— Chinese Proverb

I'll be here, thinking out loud. Pull up a chair if you want.