Silicon & Photonics | 25 November 2025 | 13 min read

The Art of the Prompt

Prompt engineering isn't about syntax. It's about thinking clearly enough that the machine can follow.


Sajad Saleem

the mediocre generalist

The best prompt I ever wrote was three words long.

I won't tell you what those three words were, because they'd be meaningless without context. That's sort of the point. The prompt itself — the text I typed into a chat window — was the smallest part of the work. Before those three words, I'd spent twenty minutes staring at a blank screen. Not writing. Thinking. Figuring out what I actually wanted. Clarifying the outcome in my own head. Discarding the first version, and the second, and the third. Stripping away everything that wasn't essential.

By the time I typed those three words, the hard work was done. The machine did the rest.

That's the thing nobody tells you about prompt engineering. The prompt is the easy part.

The oldest skill in the world

Strip away the jargon — the "system prompts" and "temperature settings" and "chain-of-thought reasoning" — and prompt engineering is just communication. It's the skill of saying what you mean with enough precision that another intelligence can act on it.

We've been doing this forever. Every time you write a brief for a designer, you're prompt engineering. Every time you explain to a builder what you want the kitchen to look like, you're prompt engineering. Every time you write an exam question, a job description, a recipe, a set of instructions for the babysitter — you're doing the exact same thing.

The only difference now is that the listener is a large language model. It takes you more literally than a human would. It has no shared context unless you provide it. It won't ask clarifying questions unless you invite it to. And it will cheerfully do exactly what you asked for, even when what you asked for isn't what you meant.

Which, if you think about it, is also true of most humans on their first day at a job.

The skill isn't new. The medium is new. And confusing the medium for the skill is where most people go wrong.

The thinking problem

Here's what I've noticed, working with AI tools daily and watching others do the same: most bad prompts aren't badly written. They're badly thought.

The person sits down, types something vague — "write me a marketing email" or "help me with my strategy" — gets something generic back, and blames the tool. ChatGPT is rubbish. Claude doesn't understand me. AI is overhyped.

But the tool did exactly what was asked. The problem is that "write me a marketing email" isn't a real instruction. It's a gesture in the direction of an instruction. It contains no information about the audience, the product, the tone, the goal, the constraints, the context, or the definition of success. It's the equivalent of walking into an architect's office and saying "build me a house" and being annoyed when they ask follow-up questions.

Note

The gap between a mediocre prompt and a brilliant one is almost never about syntax. It's about how clearly the person has thought about what they want before they start typing.

This is why prompt templates and cheat sheets only get you so far. They solve the wrong problem. They optimise the packaging when the issue is the contents. You can format a prompt beautifully — role, context, task, constraints, output format — and still get a useless result, because you haven't actually figured out what you're trying to achieve.
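
For what it's worth, here is everything a template actually gives you. The sketch below is plain Python with no AI library involved; the field names follow the list above and every example value is invented. The structure takes five lines to build, and it still produces nothing useful until each field is filled with a real decision.

```python
def build_prompt(role, context, task, constraints, output_format):
    """Assemble the role / context / task / constraints / output-format template.

    The template only packages the thinking; it cannot do the thinking for you.
    """
    sections = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ]
    return "\n\n".join(sections)


# Illustrative values only: the point is that each field forces a decision
# you have to make before you start typing.
prompt = build_prompt(
    role="You are a copywriter for a B2B software company.",
    context="Existing customers, mostly small UK teams; we are announcing a new pricing tier.",
    task="Write an email announcing the new tier, emphasising the value for small teams.",
    constraints="About 200 words, warm but professional tone.",
    output_format="A subject line followed by the email body.",
)
print(prompt)
```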

The fix isn't a better template. It's clearer thinking.

Decompose the problem. Define the outcome. Know what good looks like. Understand your constraints. Then the prompt practically writes itself.

Five principles, not five tricks

I'm not going to give you a list of "top prompt hacks." The internet has enough of those, and most of them are outdated within months as models improve. Instead, here are five principles that have held true across every model I've used — from Claude to ChatGPT to Gemini — and that I suspect will hold true for years to come. Because they're not about the technology. They're about the communication.

Clarity of intent. What do you actually want? Not roughly. Exactly. "Help me write an email" is not clear intent. "Write a 200-word email to existing customers announcing our new pricing tier, emphasising the value for small teams, in a warm but professional tone" — that's clear intent. The difference isn't length. It's specificity. You have to know what you want before you can ask for it. That sounds obvious. It's the thing most people skip.

Relevant context. What does the model need to know that it doesn't? Your audience. Your brand voice. Your constraints. The thing you tried last time that didn't work. The fact that this is for a UK market, not a US one. Models are extraordinary at generating content, but they're working in the dark unless you switch the lights on. Every piece of relevant context you provide makes the output measurably better. The key word is relevant — not everything you know, but everything the model needs to know to do this specific task well.

Examples. Show, don't just tell. One good example of what you want is worth a paragraph of description. If you want a certain tone, paste a paragraph that has that tone. If you want a certain format, show the format. Humans learn by example. Models do too — arguably better than humans, because they won't add their own interpretation. They'll pattern-match to what you showed them. Use that.
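
As a rough sketch of what that looks like in practice, again in plain Python: the sample paragraph below is invented purely to stand in for "a paragraph that has the tone you want", and the wording of the instructions is only one way to phrase it.

```python
# "Show, don't just tell": paste an example of the tone you want instead of
# describing it. The sample paragraph below is invented for illustration.
tone_example = (
    "We built this for teams like yours: small, busy, and allergic to jargon. "
    "The new tier gives you everything in Pro, at a price that makes sense "
    "for five people rather than five hundred."
)

prompt = (
    "Write a 200-word email to existing customers announcing our new pricing tier.\n\n"
    "Match the tone of this example paragraph:\n"
    f"{tone_example}\n\n"
    "Keep the same warmth and directness, and the same level of detail."
)
print(prompt)
```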

Structure. Break complex tasks into steps. Not because the model can't handle complexity — modern models handle remarkable complexity — but because decomposition forces you to think clearly about what you're asking. "Analyse my business" is one step that's actually thirty steps. "First, identify the three biggest revenue risks in my business based on this data, then for each risk suggest two mitigation strategies" — that's a prompt that will give you something useful. Structure isn't about dumbing things down for the machine. It's about thinking clearly enough to know what you're actually asking.
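
If you want to make the decomposition literal, you can split the steps into separate calls so each one builds on the last. The sketch below uses the OpenAI Python SDK as one possible example; the model name, the placeholder data, and the exact two-step split are assumptions for illustration, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # illustrative model name


def ask(prompt: str) -> str:
    """Send a single prompt and return the text of the reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


business_data = "..."  # your revenue figures, costs, churn, and so on

# Step 1: one narrow, answerable question.
risks = ask(
    "Based on this data, identify the three biggest revenue risks in my business. "
    "State each risk in one sentence.\n\n" + business_data
)

# Step 2: build on the first answer rather than asking for everything at once.
mitigations = ask(
    "For each of these risks, suggest two concrete mitigation strategies:\n\n" + risks
)
print(mitigations)
```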

Iteration. The first prompt is never the best prompt. This is perhaps the most important principle and the one most people ignore. They treat prompting like a Google search — type something in, get a result, accept it or reject it. But the real power of AI is conversational. The first response shows you what the model understood. The second prompt refines that understanding. The third gets you close. The fourth nails it. The conversation is the prompt. Each exchange sharpens the output. The people who get extraordinary results from AI aren't writing better first prompts. They're having better conversations.
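
Mechanically, a better conversation just means keeping every turn in the exchange so each refinement builds on what the model already produced. Another hedged sketch using the OpenAI Python SDK; the model name and the follow-up wording are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # illustrative model name

# The conversation is the prompt: keep every turn in the history so each
# refinement sharpens what the model already understood.
messages = []


def send(user_message: str) -> str:
    """Add a user turn, get the model's reply, and keep both in the history."""
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=MODEL, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply


first = send(
    "Write a 200-word email to existing customers announcing our new pricing tier, "
    "emphasising the value for small teams, in a warm but professional tone."
)
second = send("Warmer and less formal. Cut anything that sounds like a sales pitch.")
third = send("Closer. Tighten it to 150 words and end with one clear call to action.")
```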

These five principles won't go stale. They won't be undermined by the next model release. Because they're not about how AI works. They're about how thinking works.

The generalist advantage

Here's where the story gets interesting — and personal.

When people hear "prompt engineering," they assume it's a technical skill. Something for developers. Something that requires a computer science degree and a working knowledge of transformer architectures. The job postings reinforce this: "prompt engineer wanted, must have experience with Python, API integration, machine learning fundamentals."

It's wrong. Almost entirely wrong.

The skills that make someone genuinely good at communicating with AI are not technical skills. They're the skills of a good writer. A clear thinker. A person who can take a complex, messy, ambiguous situation and break it down into structured, communicable parts. Writing. Teaching. Design thinking. The ability to see a problem from multiple angles and express each angle precisely.

I studied Design Technology, Geography, and English for my IB — the International Baccalaureate. Not computer science. Not engineering. Not mathematics. Three subjects that, on the surface, have nothing to do with artificial intelligence.

But Design Technology taught me to think in systems. To start with user needs, not solutions. To prototype, test, and iterate. Geography taught me that everything is connected — economics to climate to politics to human behaviour — and that understanding any complex system requires holding multiple perspectives simultaneously. English taught me to write clearly, to read critically, to understand that the way you say something is inseparable from what you say.

Key Insight

I spent twenty years being told to specialise. That generalists were dilettantes. That breadth was a consolation prize for people who couldn't go deep. It turns out that breadth might be the most valuable thing you can bring to a conversation with a machine that already has all the depth in the world.

The mediocre generalist — the person who knows a bit about everything and not enough about anything — suddenly finds that this exact combination is what the moment demands. Because AI can go deep on any topic instantly. What it can't do is think across domains. It can't see the connection between a design principle and a business strategy and a human emotion unless you draw that connection for it. And drawing connections across domains is exactly what generalists do.

I've watched teams struggle with AI tools — not because the tools are bad, but because nobody stopped to think about what they actually needed. The developer writes technically perfect prompts that miss the business context. The marketer writes creative prompts that lack structural rigour. The manager writes prompts that are too abstract to be actionable. The person who does it well? Usually the one who can think across all three modes. The one who understands the technology well enough, the business well enough, and the communication well enough to sit in the middle and translate between them.

That's not a technical skill. That's a liberal arts education earning its keep.

The prompt is dying. Prompting is not.

Here's the part where I'm supposed to tell you that prompt engineering is the career of the future and you should start a course and get certified.

I won't, because it's not quite true.

The "prompt engineer" as a standalone job title is already fading. When the role emerged in 2023, models were brittle. They needed careful instruction. You had to know the right incantations — the specific phrases and structures that would coax a good response out of GPT-4 or Claude. It felt like a dark art, and for a brief window, the people who'd mastered it were in extraordinary demand.

That window is closing. Models are getting better at understanding imprecise instructions. They're more forgiving of vague prompts. They ask clarifying questions. They infer context. The gap between a carefully engineered prompt and a casual one is narrowing — not because the careful approach stopped being better, but because the floor is rising.

And the shift toward agentic AI — systems that don't just respond to prompts but take autonomous action toward goals — is changing the paradigm entirely. You're no longer writing individual prompts. You're defining objectives, setting guardrails, and delegating. It's closer to managing a team than typing commands.

But here's what hasn't changed, and won't: the underlying skill. The ability to articulate what you want. To think clearly about outcomes. To communicate with precision. To decompose a complex goal into actionable parts. To iterate based on feedback.

The syntax of prompting will keep evolving. The frameworks will change. The tools will handle more and more of the mechanical work. But the thinking — the hard, human work of knowing what you want and why you want it — that doesn't get automated. Not by GPT-5. Not by agentic systems. Not by anything on the roadmap.

The machines are getting smarter. The question is whether we're keeping up.

What Promptology actually means

I called this site Promptology for a reason. Not because prompts are the most important thing in AI — they're not. And not because I think everyone needs to become a prompt engineer — they don't.

I chose the name because the discipline of crafting a good prompt mirrors the discipline of thinking well. A prompt is just a question, and the quality of any question depends on the quality of the thinking behind it. How clearly have you defined the problem? How honestly have you assessed what you know and what you don't? How precisely can you articulate the gap between where you are and where you want to be?

These aren't AI skills. They're human skills. The oldest, most important, most persistently undervalued skills we have.

Promptology, to me, is the study of that intersection. Where human thinking meets machine capability. Where clarity of thought becomes clarity of instruction becomes quality of output. Where the liberal arts and the artificial intelligence converge, and the generalist — the person who was always told they were spread too thin — discovers that breadth was preparation, not distraction.

This isn't a site about AI tools, though we'll talk about tools. It isn't a site about productivity hacks, though you'll become more productive if you read carefully. It's a site about thinking clearly in an age that rewards clarity more than any age before it.

Because for the first time in history, clear thinking has a direct, measurable, immediate return. Think clearly, prompt clearly, and a machine will execute at a level that would have required a team of specialists five years ago. Think vaguely, prompt vaguely, and you'll get the same mediocre output that makes people say AI is overrated.

The difference is you. It was always you.

Start here

The next time you sit down with an AI — any AI, any tool, any model — and feel that familiar frustration when it doesn't give you what you wanted, pause.

Before you rewrite the prompt, rewrite your thinking.

What do you actually want? Not vaguely. Specifically. Why do you want it? For whom? In what format? With what constraints? What does good look like? What does bad look like? What context would a brilliant human assistant need to do this task perfectly — and have you provided that context?

Answer those questions clearly and the prompt will take care of itself.

That's not a hack. It's not a framework. It's not a secret the prompt engineering gurus don't want you to know.

It's just thinking. Done properly. At last.