Four Ways People Actually Use AI at Work
There is a lot of noise about what AI can do. Here is a simple map of what people are actually doing with it — across every profession, skill level, and industry.
TLDR
- There are four main ways people use AI at work: writing and communication, research and summarisation, thinking and decision-making, and getting AI to complete tasks end-to-end.
- These are categories, not a progression. You do not need to work through all four. Each one produces real value on its own.
- Most people start with writing or research without even realising it. The more sophisticated uses follow naturally once you have built the intuition.
- If you already use AI regularly and want to understand what separates good AI users from the rest, the previous post covers the skill that actually matters.
There is a lot of noise about AI. What it will change. What it will replace. How powerful it is becoming.
What I find missing from most of that conversation is something more basic: what are people actually doing with these tools, right now, in real work situations?
This post is an attempt to answer that question directly. After spending a year using AI seriously across product work, writing, research, and building small tools, I have noticed that most practical use falls into four broad categories.
They are not a hierarchy. A nurse using AI to draft patient handover notes is doing something just as legitimate as an engineer using AI to build an automated workflow. The question is not which category is best — it is which one is most useful to you, today, in the work you actually do.
Category one: writing and communication
This is where most people start, often without thinking of it as AI use at all.
They paste a rough draft into ChatGPT and ask it to clean it up. They ask it to write a first version of an email they are dreading. They use it to shorten a long report into an executive summary, or to transform bullet points into a coherent paragraph.
The reason this is the most common starting point is obvious: almost everyone writes at work, writing takes longer than it should, and AI is genuinely good at drafting.
But the more interesting version of this is not about speed. It is about getting past the blank page.
Most people find that the hardest part of writing is starting — not refining. An AI-generated first draft, even a mediocre one, gives you something concrete to react to. And reacting is much easier than originating. You cut things that are wrong, you correct things that miss the point, you add the judgment and specificity that only you can provide. The total time is shorter. The output is often better than if you had written from scratch, because you were reacting rather than generating.
This applies across professions in ways that are easy to underestimate. A government policy analyst using AI to draft consultation responses. A nurse using it to write handover summaries. A sales manager using it to draft a proposal. A teacher using it to write feedback comments. In each case the expertise is still human — AI is providing the scaffolding.
Category two: research and summarisation
The second category is about processing information faster.
When you need to understand something new — a market, a technology, a regulation, a concept — AI compresses the orientation phase considerably. You can ask it to explain something at whatever level of depth is useful, request a summary of a document, or ask it to extract the three most relevant points from a long piece of text.
This is different from search. Search returns pages that contain answers. AI returns answers directly, in a format tailored to what you asked. The difference in time spent is significant.
I use this most often when I need to get up to speed on something outside my core area. If I am going into a conversation about a technical architecture I am not expert in, I will spend fifteen minutes with AI beforehand so I am not starting from zero. If I need to understand a competitor's pricing model, I will ask AI to give me a structured summary rather than spending an hour reading their website. If I am reviewing a long contract, I will ask AI to extract the clauses that are most likely to matter.
The important caveat — and it is a real one — is that AI can be confidently wrong. It will sometimes hallucinate facts, misattribute quotes, or give you an outdated picture. For anything that matters, you verify. But the research function is still valuable: AI gives you the map, and you use the map to navigate faster even as you remain alert for errors.
Many professionals who use AI regularly describe this as the area where they recover the most time. Research that used to take a full day now takes a morning. Orientation that used to require weeks of reading can happen in an afternoon.
Category three: thinking and decision-making
The third category is less obvious but, in my experience, often the most valuable.
This is using AI as a thinking partner — not to get an answer, but to think through a problem more rigorously.
When I am working through a difficult decision, I will often explain the situation to an AI model and ask it to argue the other side. Or to surface assumptions I might be making. Or to generate three approaches I have not considered. Or to identify where I am most likely to be wrong.
I do not do this because I trust the output. I do it because the process of explaining the problem clearly enough for AI to engage with it forces me to be more precise about what I am actually trying to decide. The act of articulation surfaces gaps in your own thinking faster than almost anything else.
There is a reason good consultants ask so many questions. Explaining something to someone who does not already know it forces you to notice what you have left vague or assumed without checking. AI provides a fast, friction-free version of that pressure.
I have also found this useful for generating structured arguments before a difficult stakeholder conversation. Not to use the output verbatim — it is almost never right for the specific context — but to have considered a wider range of objections than I would have reached on my own.
The failure mode in this category is treating AI outputs as conclusions. They are not. They are prompts for your own thinking. The value comes from the friction of engaging with them, not from accepting them.
Category four: agentic workflows — getting AI to complete tasks end-to-end
The fourth category is where things get more powerful and more technical, and where most people have not yet arrived.
An agentic workflow is when you set up AI to complete a multi-step task independently — not just answer a question, but take a sequence of actions on your behalf.
A simple example: instead of asking AI to help you write a weekly report, you set up a workflow where AI automatically pulls data from a spreadsheet, formats it according to a template you have defined, drafts the report, and drops it into a shared folder. You review and send.
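The shape of that simple workflow can be sketched in a few lines of Python. Everything here is illustrative: the file names, the column names, and the draft_with_ai stub (which stands in for a call to whatever language model you use) are assumptions, not a specific product's API.

```python
# A minimal sketch of the weekly-report workflow described above.
# Paths, column names, and the AI step are illustrative assumptions.
import csv
from pathlib import Path
from string import Template

REPORT_TEMPLATE = Template(
    "Weekly report\n"
    "Total items: $total\n"
    "Highlights:\n$highlights\n"
)

def draft_with_ai(structured_summary: str) -> str:
    # Placeholder: in a real workflow this would call a language model
    # to turn the structured summary into polished prose.
    return structured_summary

def build_weekly_report(data_csv: Path, shared_folder: Path) -> Path:
    # 1. Pull data from the spreadsheet export.
    with data_csv.open(newline="") as f:
        rows = list(csv.DictReader(f))
    # 2. Format it according to a template you have defined.
    highlights = "\n".join(f"- {r['item']}: {r['status']}" for r in rows)
    summary = REPORT_TEMPLATE.substitute(total=len(rows), highlights=highlights)
    # 3. Draft the report and drop it into the shared folder.
    draft = draft_with_ai(summary)
    out = shared_folder / "weekly_report.txt"
    out.write_text(draft)
    return out  # a human still reviews and sends
```

The point of the sketch is the structure, not the details: each step is deterministic and inspectable except the drafting step, which is exactly where the AI sits.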
A more advanced example: an AI agent that monitors incoming customer feedback, categorises it by theme, flags anything urgent, and produces a weekly summary for the product team — without anyone manually doing any of those steps.
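A toy version of that triage agent might look like the following. The keyword rules here stand in for an AI classifier purely so the example is self-contained; the theme names and the "urgent" rule are assumptions for illustration, and in practice each classification step would be a model call.

```python
# A toy sketch of the feedback-triage agent described above.
# Keyword matching stands in for an AI classifier; themes are illustrative.
from collections import defaultdict

THEMES = {
    "billing": ["invoice", "charge", "refund"],
    "performance": ["slow", "crash", "timeout"],
}

def categorise(message: str) -> str:
    # In a real agent this would be a language-model classification call.
    text = message.lower()
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            return theme
    return "other"

def triage(feedback: list[str]) -> dict:
    by_theme = defaultdict(list)
    urgent = []
    for msg in feedback:
        by_theme[categorise(msg)].append(msg)
        # Flag anything urgent for immediate attention.
        if "urgent" in msg.lower() or "crash" in msg.lower():
            urgent.append(msg)
    # Produce the weekly summary for the product team.
    summary = "\n".join(f"{t}: {len(msgs)} item(s)" for t, msgs in by_theme.items())
    return {"urgent": urgent, "summary": summary}
```

Swapping the keyword rules for a model call is what makes this "agentic" rather than a plain script: the classification judgment, not just the plumbing, is delegated.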
These setups require more technical confidence. They involve things like APIs, automations, and tools that connect different software together. They also require more upfront thinking, because you are effectively writing instructions for a system rather than having a conversation.
But they are not reserved for engineers. Increasingly there are low-code and no-code tools that allow people with limited technical backgrounds to build simple agentic workflows. The cost of entry is coming down quickly.
The reason this category matters is not just speed — it is scale. Writing assistance makes you faster at a task you were already doing. Agentic workflows let you take tasks off your plate entirely. That is a different kind of leverage.
Most people who get here do not start here. They work through the earlier categories first, build intuition about what AI is reliable enough to do unsupervised, and then gradually give it more autonomy in well-defined, lower-stakes situations.
A note on how these relate to each other
These categories are not a ladder. You do not have to climb from writing to research to thinking to agents in sequence. They are parallel options, each valid depending on what you are trying to do.
Most people settle into two or three that are most relevant to their work and go deep there rather than spreading across all four. A lawyer might use categories one and three almost exclusively. A data analyst might spend most of their time in category two. An operations manager might eventually build category four workflows for their most repetitive processes.
What matters is not how many categories you are using. What matters is whether the ones you are using are actually saving you time, improving the quality of your thinking, or both.
My current view
The thing I find most striking, having watched how different people start using AI at work, is how much the initial use case shapes everything that follows.
People who start by using AI for writing tend to develop a strong feel for how to direct it. People who start with research tend to get good at knowing what to verify. People who start with thinking tend to develop a more experimental relationship with the tools.
The entry point matters less than the habit of use. The intuition you need to use AI well — knowing what to trust, what to check, when to push for more, when to take the output and move on — mostly comes from doing it repeatedly, not from reading about it.
Pick the category that is most immediately relevant to your work and start there. The rest will become clearer as you go.
This is part of a series on working effectively with AI. Start with AI Won't Replace Your Job — But Someone Using AI Might if you have not already, or jump to The Skill That Matters Most in the AI Era: Problem Framing if you want to go deeper on what separates useful AI from disappointing AI.
Related posts
AI Won't Replace Your Job — But Someone Using AI Might
The debate about whether AI will replace jobs misses the more important question: what changes for the people who actually learn to use these tools well?
The Skill That Matters Most in the AI Era: Problem Framing
Prompt engineering had its moment. Context engineering came next. The thing that still matters most — and the one thing AI can't do for you — is knowing what you actually want.