AI Won't Replace Your Job — But Someone Using AI Might
The debate about whether AI will replace jobs misses the more important question: what changes for the people who actually learn to use these tools well?
TLDR
- AI does not replace expertise. It multiplies output for people who already know how to think.
- The biggest gains come from using AI for drafting, research, and stress-testing decisions — not from treating it as an expert.
- The skill that matters is not prompt engineering. It is problem framing.
- If you already know all of this, the next article walks through the specific workflows where AI saves the most time: How I Use AI as a Product Manager (coming soon).
The question I hear most often about AI at work is a binary one: will this replace us or not?
I think that is the wrong question. Or at least, it is not the most useful one to spend much time on.
A more useful question: what actually changes for the people who learn to use these tools well?
The real effect is leverage, not replacement
My working hypothesis, after using AI tools seriously over the past year or so, is this: the primary effect of AI on knowledge work is not replacement. It is leverage.
AI does not replace the need to understand a problem, form a view, or make a judgment call. What it does is reduce the time and effort needed for certain kinds of work — drafting, summarising, researching, structuring ideas, stress-testing decisions — so that a person who uses it well can do significantly more than one who does not.
The people who figure this out early are not in a different category of intelligence. They are operating at a different level of output.
What that actually looks like
A few concrete examples from how I use AI in product work:
Writing first drafts. A product spec that might take two hours to structure from scratch now takes thirty minutes. But the leverage is not just in the drafting speed — it is also in figuring out the right shape before you start writing.
Different organisations respond to different types of specs. Some need a tight one-pager. Others need a formal requirements doc. A one-size-fits-all template usually ends up feeling laborious to whoever has to fill it in, which means they do not fill it in properly, which defeats the whole purpose. A product spec is ultimately a tool for communicating clarity. If nobody reads it, no clarity is being communicated.
When I am not sure which format will actually land, I will often have a quick brainstorm with AI first — describing the team, the context, and the kind of decision we are trying to make — and let it surface a few different approaches. Then I can apply my own judgment about what fits the situation, and use AI to help draft from there.
Research and summarisation. When I need to understand a topic quickly — a market dynamic, a competitor's approach, a technical concept — AI can compress hours of reading into a useful starting map. It is not always accurate, and I still verify things that matter. But the orientation phase of research is much faster.
Decision stress-testing. When I am working through a difficult call, I will explain the situation to an AI model and ask it to argue the other side. Not because I trust the output to be right, but because it surfaces objections I might have glossed over. It is a fast, cheap way to pressure-test your own thinking.
None of these replace judgment. They reduce the cost of doing the work around judgment.
What AI is genuinely bad at
The best analogy I have come across: working with AI is like managing a team of very eager graduates fresh out of university.
They have enormous energy. They will execute on your task quickly and without complaint. They want to help. But they are not experts. They do not yet have the industry experience, the professional judgment, or the implicit understanding of what "good" actually looks like in your specific context.
As the senior on the team, your job is to provide all of that. You define the goal clearly. You explain the task. You show them what good output looks like. You course-correct when they go off track. Once you do that work upfront, they become genuinely fast and capable hands.
The failure mode is treating AI as an expert rather than as a capable executor. If you hand a vague brief to a fresh graduate and say "figure it out," you will usually get something that misses the point. The same is true here. The quality of the output is mostly a function of the quality of the direction.
Which brings us to the actual skill.
The skill that actually matters
The popular term is "prompt engineering," which makes it sound more technical than it is.
The real skill is closer to problem framing: being clear enough about what you are trying to do, what the constraints are, and what good looks like, so that you can direct the tool usefully and evaluate what comes back.
That is not a new skill. It is basically the same skill that makes you good at briefing a colleague, writing a clear spec, or structuring a meeting agenda. AI just gives you a much faster feedback loop.
People who are already clear thinkers tend to get more from AI tools faster. People who are vague in their thinking will get vague outputs. The tools do not fix that problem — they often surface it faster.
My current view
I do not think the question "will AI replace my job?" is worth spending much energy on. The outcome is mostly not in your control.
The more useful frame: what does someone who is genuinely good at using AI look like in your field, and what would it take to become that person?
In most knowledge-work professions, the answer is probably not that different from what makes anyone good at their job: clear thinking, sound judgment, and knowing when to trust a tool and when to verify it.
AI changes the leverage ratios. The underlying skill stack is largely the same.
This is the foundation. If you are further along and want to understand how AI agents change this picture — where the graduates start to look more like specialists — that is a later conversation.