Think in Pieces
The skill is not solving the problem. It is slicing it. And it applies to far more than code.
There is a paradox hiding inside the most powerful tools we have ever built. The more capable an AI agent becomes, the more it needs us to think small.
Not small as in unambitious. Small as in specific. Bounded. Concrete enough that a system which has no intuition about what you meant can execute without guessing. We have tools that can work autonomously for hours, even days. Opus 4.6, Gemini 3.1 Pro, GPT 5.3 Codex: they will accept a task in the morning and return finished work by evening. The catch is that they will do precisely this whether the task was well-defined or not. They do not stop to tell you the assignment was unclear. They simply build something, confidently and completely, and leave you to discover the misalignment afterward.
The difference between someone who uses these tools well and someone who does not is rarely intelligence, experience, or even technical skill. It is something quieter: the ability to break a large ambition into pieces small enough to carry.
This is decomposition. And it may be the most underrated skill in the current economy.
The Whole Is Not Greater. It Is Unmanageable.
Researchers studying large language models found something that should change how we all work. When a model is given a problem and asked to decompose it into steps before solving, breaking the reasoning into a tree of sub-problems, it dramatically outperforms the same model attempting to solve the problem whole. The technique is called tree-of-thought prompting, and the results are not marginal. They are substantial.
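For readers who want to see the shape of the technique, here is a minimal sketch of the breadth-first variant in TypeScript. The `llm` function is a stand-in for whatever model API you actually call, and the depth, branching, and beam numbers are illustrative, not the paper's.

```typescript
// A breadth-first tree-of-thought sketch. `llm` is a placeholder for a
// real model call; everything about it here is an assumption.
type LLM = (prompt: string) => Promise<string>;

async function treeOfThought(
  llm: LLM,
  problem: string,
  depth = 3,  // reasoning steps to explore
  branch = 3, // candidate next steps proposed per partial solution
  beam = 2,   // partial solutions kept at each level
): Promise<string> {
  let frontier: string[] = [""]; // partial chains of reasoning

  for (let level = 0; level < depth; level++) {
    const candidates: { chain: string; score: number }[] = [];
    for (const chain of frontier) {
      for (let i = 0; i < branch; i++) {
        // Propose one more sub-step for this partial chain.
        const step = await llm(
          `Problem: ${problem}\nSteps so far:\n${chain}\nPropose the next step.`
        );
        const next = `${chain}${step}\n`;
        // Have the model judge how promising the partial chain is.
        const raw = await llm(`Rate 0-10 how promising this reasoning is:\n${next}`);
        const score = Number(raw);
        candidates.push({ chain: next, score: Number.isNaN(score) ? 0 : score });
      }
    }
    // Prune to the most promising chains. This explore-and-prune loop is
    // what separates tree-of-thought from a single linear chain of thought.
    frontier = candidates
      .sort((a, b) => b.score - a.score)
      .slice(0, beam)
      .map((c) => c.chain);
  }

  return llm(`Problem: ${problem}\nReasoning:\n${frontier[0]}\nFinal answer:`);
}
```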
This finding mirrors what the best project managers and architects have always known: a plan is not a single large intention. It is a sequence of small, verifiable commitments. The reason most ambitious projects stall is not that the ambition is wrong. It is that nobody translated the ambition into pieces that can actually be executed, checked, and corrected independently.
What changed is the cost of getting this wrong. When execution was slow and expensive, poor decomposition meant delays and waste. When execution is fast and autonomous, poor decomposition means an agent running at full speed in the wrong direction for six hours, producing a coherent, well-structured monument to a misunderstood requirement. The feedback loop is no longer slow. It is absent.
What Decomposition Actually Looks Like
This skill is easier to recognize in practice than to describe in theory. Here is the difference.
A vague directive sounds like this: “Build a customer onboarding system that integrates with our existing tools and handles edge cases gracefully.” That sentence contains at least six decisions, four ambiguities, and zero boundaries. Hand it to an autonomous agent and the agent will produce something. That something will reflect the agent’s interpretation of “gracefully,” its guess about which existing tools, and its own definition of edge cases. You will spend more time understanding what it built than it spent building it.
The same ambition, decomposed, sounds different. First: define the data model for a new customer record, including these five fields, constrained to these types. Second: write an API endpoint that creates a customer record and returns a confirmation, with validation for email format and duplicate detection. Third: build a webhook integration with the CRM, triggered on successful creation, with retry logic for failures. Each piece is specific enough that an agent can execute it independently, and each piece is small enough that you can verify the result in minutes rather than hours.
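To make “specific enough” concrete, here is what the first piece might look like as an artifact the agent receives. The zod library and the field names are illustrative choices, since the brief's actual five fields are not spelled out here.

```typescript
import { z } from "zod";

// Piece one: the customer record, pinned to exact fields and types.
// zod and these field names are illustrative, not from the brief.
export const CustomerRecord = z
  .object({
    id: z.string().uuid(),
    email: z.string().email(), // piece two's validation depends on this
    fullName: z.string().min(1),
    plan: z.enum(["free", "pro", "enterprise"]),
    createdAt: z.string().datetime(),
  })
  .strict(); // unknown fields are rejected, not silently dropped

export type Customer = z.infer<typeof CustomerRecord>;
```

An agent handed this file has nothing left to guess about the data model, and piece two can build on it as given.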
The work did not get simpler. It got clearer. That is the entire point.
Vertical, Not Horizontal
There is a specific trap that even experienced teams fall into. AI agents naturally produce horizontal plans: all the database work first, then all the services, then all the API endpoints, then all the frontend. Twelve hundred lines of implementation later, nothing is testable, nothing is integrated, and the first user-facing feature is still theoretical.
Teams at HumanLayer tried everything to stop this pattern. Different models, different prompts, extensive evaluation. The models kept producing horizontal plans regardless. The fix was not better prompting. It was better decomposition by the human: forcing vertical slices. Mock the API endpoint first, get it working in the frontend, then wire the real service behind it, then do the migration. Same scope, but with checkpoints where you can verify correctness along the way.
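In code, the first checkpoint of a vertical slice can be almost embarrassingly small. A sketch using Express (an illustrative choice, not HumanLayer's actual stack): the mock ships first so the frontend can integrate against the final shape, and the real service later replaces the handler body behind the same route.

```typescript
import express from "express";

const app = express();

// Step one of the vertical slice: a mock endpoint with the final
// response shape. The frontend integrates against this today, before
// any real service or migration exists.
app.get("/api/customers/:id", (req, res) => {
  res.json({
    id: req.params.id,
    email: "mock@example.com", // canned data, deliberately obvious
    fullName: "Mock Customer",
    plan: "free",
  });
});

// Step two, later: replace the handler body with the real service
// call. The route, the shape, and the frontend do not change.

app.listen(3000);
```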
This is the difference between a plan that is logically organized and a plan that is operationally useful. Models are excellent at the first. Humans need to supply the second.
This Is Not a Coding Skill
Here is where the conversation tends to narrow, and it should not. Decomposition is not about software engineering. It is a thinking discipline that applies to any domain where AI agents are doing the work.
A product manager decomposing a quarterly roadmap into agent-executable research tasks is practicing this skill. A marketing director breaking a campaign launch into discrete content briefs, each with its own audience and success criteria, is practicing it. A founder translating a company vision into a set of bounded experiments, each testable in a week, is practicing it.
The skill is the same everywhere: can we take a large intention and express it as a sequence of bounded, verifiable, independently executable pieces? The tools do not care whether the domain is software, marketing, strategy, or logistics. They care whether the task is clear.
The 10x Gap, and Why It Is Widening
Nate B. Jones recently made an observation that deserves more attention than it received. Prompting, which we used to treat as a single skill, has quietly split into at least four distinct capabilities. Decomposition is one of them. And the gap between people who have developed it and people who have not is already an order of magnitude.
This is not hyperbole. In teams that use autonomous agents regularly, the decomposers consistently produce work that is reviewed, approved, and shipped. The non-decomposers consistently produce work that is reviewed, reworked, and eventually rewritten (or handed back to the same agent, this time with a proper brief). The time difference is closer to ten times, because the decomposer’s work compounds: each well-scoped piece builds on the verified output of the previous one, while the non-decomposer is perpetually starting over from a failed whole.
The gap is widening because the tools are getting better at execution, which means they are getting better at amplifying both good decomposition and bad. A more capable agent given a well-scoped task produces better work faster. The same agent given a vague mandate produces more impressive nonsense faster. The leverage cuts in both directions.
Learning to Slice
The reassuring truth is that decomposition is learnable. It is not a gift. It is a discipline, and like most disciplines, it improves with practice and a few concrete habits.
The first habit is to resist the instinct to describe the outcome before describing the steps. When we hand a task to an agent, the temptation is to describe the finished product: “build me a dashboard that shows X, Y, and Z.” The better approach is to describe the first step only: “create a data source that returns X.” Then verify. Then proceed to Y. The ambition stays large. The instructions stay small.
One team learned this the hard way. They started with a three-stage workflow (research, plan, implement) governed by a single prompt with 85 instructions. Half their users got poor results. They decomposed the process itself into seven smaller stages, each with under 40 instructions, each producing an artifact that feeds the next. Better research, better plans, dramatically better code. The process needed the same decomposition the tasks did.
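The general shape, not that team's actual pipeline, is a chain of stages where each stage's output artifact becomes the next stage's input, with a checkpoint at every boundary:

```typescript
// Each stage turns the previous artifact into the next. The stage
// bodies and their count are hypothetical; the shape is the point.
type Stage = (artifact: string) => Promise<string>;

async function runPipeline(stages: Stage[], input: string): Promise<string> {
  let artifact = input;
  for (const stage of stages) {
    artifact = await stage(artifact);
    // Every boundary is a checkpoint: a bad research artifact is
    // caught here, before it becomes a bad plan or bad code.
    console.log(`--- checkpoint ---\n${artifact.slice(0, 200)}`);
  }
  return artifact;
}
```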
The second habit is to make each piece independently verifiable. If we cannot check whether a piece succeeded without seeing the whole, the piece is too large. Good decomposition produces pieces that have their own success criteria, their own boundaries, their own definition of done. A well-sliced task can be evaluated in isolation, and that is what makes the whole reliable.
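In software, that can be as small as a test that exercises one piece with no knowledge of the rest. A sketch using Node's built-in test runner against the hypothetical schema from earlier (the import path is illustrative):

```typescript
import test from "node:test";
import assert from "node:assert/strict";
import { CustomerRecord } from "./customer"; // hypothetical path to the schema sketched earlier

// This piece's definition of done, checkable with no CRM, no webhook,
// and no other part of the system in existence yet.
test("rejects a malformed email", () => {
  const result = CustomerRecord.safeParse({
    id: "7f9c2ba4-e88f-4aa9-8c05-2b1f76f1a1c1",
    email: "not-an-email",
    fullName: "Ada Lovelace",
    plan: "pro",
    createdAt: new Date().toISOString(),
  });
  assert.equal(result.success, false);
});
```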
The third habit, and perhaps the hardest to develop, is to sequence for dependency, not for importance. We naturally want to start with the most exciting part, the user interface, the big feature, the visible result. But agents, like buildings, need the less glamorous pieces in place first: the data model before the API, the API before the interface, the interface before the polish. The sequence that feels least satisfying to plan is usually the one that produces the most reliable result.
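Mechanically, sequencing for dependency is a topological sort. A small sketch with hypothetical task names; the satisfying piece comes out last because everything it needs comes out first:

```typescript
// Tasks and what each depends on. Names are hypothetical.
const deps: Record<string, string[]> = {
  "data-model": [],
  api: ["data-model"],
  interface: ["api"],
  polish: ["interface"],
};

// Depth-first topological sort: a task is emitted only after
// everything it depends on has been emitted.
function order(deps: Record<string, string[]>): string[] {
  const done = new Set<string>();
  const out: string[] = [];
  const visit = (task: string) => {
    if (done.has(task)) return;
    done.add(task);
    for (const d of deps[task] ?? []) visit(d);
    out.push(task);
  };
  for (const task of Object.keys(deps)) visit(task);
  return out;
}

console.log(order(deps)); // ["data-model", "api", "interface", "polish"]
```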
None of these habits require technical knowledge. They require patience, specificity, and a willingness to think about the work before starting the work. Which, it turns out, is the oldest form of wisdom dressed in the newest clothing.
This is part of “The Hitchhiker’s Guide to the K-Shaped Economy.” Previous: “The Taste Gap” on judgment. Next: on orchestration.