The Skill That Matters Most in the AI Era: Problem Framing
Prompt engineering had its moment. Context engineering came next. The thing that still matters most — and the one thing AI can't do for you — is knowing what you actually want.
TL;DR
- The AI landscape has evolved from prompt engineering to context engineering to environment engineering — but the underlying question is always the same: how do you set up conditions where the model does its best work?
- AI can now help you with prompt quality and context structure. You can literally ask the model to improve your own prompt.
- The one thing AI cannot do for you is define what you actually want. That is still a human job.
- Beginners do not need to start at the power-user level. Basic usage produces real value. The rest comes with exposure.
- If you want to go deeper on the specific workflows where AI saves the most time: How I Use AI as a Product Manager (coming soon)
When AI tools first became genuinely useful, the dominant advice was about prompt engineering.
Use the right phrasing. Add "think step by step." Frame it as a role. Use the right separators. Chain your reasoning.
Then came context engineering — understanding that the prompt itself was only part of what mattered. The model needed the right surrounding information: background, constraints, prior decisions, format expectations. Flood it with too much and it gets confused. Give it too little and it guesses.
Now the conversation has moved again. People building more advanced AI setups talk about the full environment: the right tools, the right memory, progressive disclosure so the model is not overwhelmed with options it does not need yet.
In other words, the framing keeps evolving. But the underlying question stays the same:
How do you set up conditions where the model can do its best work?
That question — and how you answer it — is what separates useful AI from disappointing AI.
Here is the thing most people miss early on
When I started using AI regularly, I assumed the quality of what I got back was mostly a function of the model itself.
Better model, better output. Made sense.
What I actually found was more uncomfortable.
The thing that most consistently predicted output quality was the clarity of my own input. Not technically — the prompts were coherent enough. But the thinking behind them was often fuzzy:
- I had not said who this was for
- I had not distinguished what I wanted from what I actually needed
- I had not named the real constraint
- I had not thought about what a good answer would look like
The model returned something reasonable. It was just generic, slightly off-target, and required several rounds of correction to become useful.
Eventually I realised: AI is quite good at solving clearly defined problems. It is much less useful for problems that have not been defined yet.
That is not a flaw in the model. It is a reflection of the input. The output quality is often a reasonably accurate signal of how clearly you have thought about the problem going in.
The part that surprised me: AI can help with most of this
Here is something I found that changed how I think about this.
Prompt engineering — the craft of structuring inputs well — is something AI has become quite good at helping with.
If you are not sure your prompt is clear, you can ask: "Can you rewrite this prompt to be more specific and useful?" or "What information are you missing to give me a better answer?"
The model will often do a better job editing your prompt than you would by trial and error.
The same is increasingly true for context management. If you have been running a long conversation and things are getting muddled, you can ask the model to summarise what it knows and flag what would help it most. With persistent setups — markdown files, structured notes — you can ask AI to audit and compact that context for you.
In other words, a lot of the mechanical work of using AI well can be delegated back to the AI itself.
This is a genuinely useful thing to know early. You do not need to become an expert before you start. You can use the tool to help you get better at using the tool.
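The "delegate the mechanics back to the model" idea can be sketched as a tiny wrapper that turns any draft prompt into a meta-prompt asking the model to improve it first. This is a minimal illustration, not tied to any particular API or vendor; the function name and wording are my own.

```python
def build_meta_prompt(draft_prompt: str) -> str:
    """Wrap a draft prompt in a request for the model to improve it
    before answering it. Illustrative only; adapt the wording to taste."""
    return (
        "Here is a prompt I am about to send you:\n\n"
        f"---\n{draft_prompt}\n---\n\n"
        "Before answering it, do two things:\n"
        "1. Rewrite the prompt to be more specific and useful.\n"
        "2. List any information you are missing to give a better answer."
    )

# Example: wrap a vague request before sending it to your chat model of choice.
draft = "Write a product spec for a notifications feature."
meta = build_meta_prompt(draft)
```

The model's reply to `meta` gives you a sharper prompt plus a list of gaps to fill in before making the real request, which is exactly the loop described above.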
What that leaves
Once you accept that AI can help with prompt quality, context structure, and environment optimisation, the question shifts.
What is the thing AI cannot do for you?
The answer is: knowing what you actually want.
Not in a vague, philosophical sense. I mean it practically.
Before a model can help you effectively, something has to be decided:
- What is the actual problem here?
- Who does it affect, and in what situation?
- What does a good answer need to do?
- What constraints actually matter?
AI can help you refine how you express those things once you have a rough draft of them. It cannot generate them from nothing. If you ask a model to help you decide what problem is worth solving, it will produce something plausible — but it will be working from prior patterns, not from your specific situation.
That first step — defining what you are actually trying to figure out — is where human judgment is still genuinely irreplaceable.
A concrete example of the difference
Here is what most people type:
"Write a product spec for a notifications feature."
The output will be competent and completely generic. It will cover the obvious things. It will not be wrong. But it will not be about your product, your users, or your actual problem.
Here is the same request after spending three minutes on the problem first:
"I'm a product manager working on a B2B tool used by operations teams. Users miss critical alerts because the current system doesn't distinguish between routine updates and urgent problems. I need a spec for a notifications feature that helps users triage urgency quickly without creating alert fatigue. Constraints: can't require significant workflow changes, small engineering team so scope must be tight. Generate three possible approaches."
Same tool. Same model. Completely different output.
The second prompt is not better because of clever phrasing. It is better because the problem has been defined.
If you are just getting started, here is what actually matters
None of this means you need to have everything figured out before AI becomes useful to you.
The commercial AI tools available now are genuinely good. Even basic usage — just asking questions in plain language, iterating on drafts, using it as a thinking partner — produces real value. You do not need to understand context windows or agent frameworks to get started.
The progression tends to look something like this, and most people go through it naturally just by using the tools regularly:
- You start prompting. Things are sometimes useful, sometimes off. You start noticing patterns.
- You realise you get better results when you give more context. You start adding background information.
- You discover you can ask AI to improve your own prompts. That shortcut saves a lot of time.
- Eventually you learn about context engineering, persistent memory, and tool selection — but by then the vocabulary makes sense because you have seen the problems they solve.
There is no need to skip ahead. The hill is not that steep, and the early steps are where you build the intuition that makes everything else click.
The one thing worth developing early — because it underpins all the rest — is the habit of pausing before you type and asking: what am I actually trying to figure out here?
That habit does not require any technical knowledge. It just requires a few seconds of honest thinking.
My current view
Prompt engineering had its moment. Context engineering is still live, but tools increasingly help you manage it. The environment question will keep evolving.
What I keep coming back to is that the skills with the longest shelf life are the ones that predate AI entirely:
- defining problems clearly
- distinguishing between what you want and what you need
- being specific about constraints and success criteria
- knowing when you have enough to act and when you need more information
AI amplifies those skills. It does not replace them. And if you do not have them yet, regular use of AI will accelerate them — because every time the output is off, the fastest path to fixing it is asking why the input was not clear enough.
Stay curious. Start where you are. The rest follows.
This is part of a series on working effectively with AI. The first post covers why AI is a multiplier, not a replacement.