In the previous post, I talked about how "you can only direct others once you've done it yourself." That leaves the next question: how exactly do you give directions?
When I delegate work to an agent, the results often don't match my expectations. Looking back, the cause was rarely a technical limitation. Most of the time, the problem was on my end. What ultimately determines the outcome is how clearly the blueprint is drawn in my own head.
When the blueprint is sharp, you can see where the gaps are. You immediately notice when the agent's "defaults" diverge from your intent. When the blueprint is hazy, you don't even know what to specify, and you can't judge whether the result is right or wrong.
Before handing execution to the agent, I've started checking three things.
- Where are we right now? Without knowing the current code structure and dependencies, you can't verify whether the agent's plan even makes sense. If I'm unsure, I have the agent analyze the current state first. Since that's something I can evaluate, I can tell whether it's right or wrong. No matter how large the context window gets, there's no guarantee the agent truly understands existing code. It sometimes mistakes existing code descriptions for new specs and builds duplicates. That's why I need to understand the current state first—so I can catch these errors early.
- Is this the kind of thing you only understand by doing? Sometimes I read a spec and think I get it, but implementation reveals gaps the spec never addressed. It could be a development concern or a planning concern. If it only surfaces through doing, I don't hand off the whole thing—I hit the gaps first, then delegate the rest.
- Can this be executed as-is? Even when specs and designs are "finalized," they don't ship unchanged. Agents don't really question what they're given. They quietly fill in gaps. So before handing off, I scan for big-picture concerns myself and have the agent enumerate edge cases. I take that list, align with the PM or designer, then proceed.
These three checks are ultimately the process of making the blueprint sharp.
Even with a sharp blueprint, new gaps emerge when you try to communicate it to the agent.
Say we've agreed to build a card UI. The direction is right. But details like what to do when text overflows, or what to show on hover, often exist only in my head. The agent fills those gaps its own way. The probability that its defaults match the picture in my head is low.
But specifying everything means the cost of writing the query exceeds the cost of just building it yourself. So you need to be selective about what to communicate. I think in two categories:
- Hard constraints that must be followed
- Areas where the agent can use its judgment
In intent engineering, these are called hard constraints and steering constraints: absolute limits that must never be crossed, versus directional guidance where the specific choices are left to the agent. The higher the cost of reverting, the more it exists only in my head, or the harder it is to spot when wrong—the more it leans toward a hard constraint.
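As a minimal sketch of how I think about this split (the shape and all names here are hypothetical, not any real tool's API), a delegation brief for the card UI example might separate the two kinds of constraints explicitly:

```typescript
// Hypothetical shape for a delegation brief. Hard constraints are
// non-negotiable limits; steering constraints give direction but
// leave the specific choices to the agent.
interface DelegationBrief {
  hardConstraints: string[]; // must never be violated
  steering: string[];        // direction only; agent picks specifics
}

// Example brief for the card UI task discussed above.
const cardUiBrief: DelegationBrief = {
  hardConstraints: [
    "Do not change the public props of the existing Card component",
    "Overflowing text is truncated with an ellipsis, never clipped mid-glyph",
  ],
  steering: [
    "Hover state should feel subtle; exact shadow and transition are your call",
  ],
};

console.log(cardUiBrief.hardConstraints.length, cardUiBrief.steering.length);
```

Writing it down this way also makes the cost test concrete: anything expensive to revert or hard to spot when wrong migrates up into `hardConstraints`.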
And context that repeats every time should be baked into the system, not the query. Stack defaults into skills and system prompts, and queries only need to convey exceptions. That said, you don't need to front-load too much. Models keep getting stronger—what needed explicit specification six months ago may be unnecessary now. It's better to build up incrementally and keep only what's needed.
Design works similarly. Recurring things like spacing units and color palettes go into the system as design tokens. States that aren't visually obvious, or design intent that's hard to capture in mockups—have the agent enumerate those first, then I review and correct. Much lighter than writing specs from a blank canvas.
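A tokens module is one way to bake those recurring decisions into the system. This is a sketch under assumed values (the names and numbers are illustrative, not from any real project): the agent reads the module instead of being told the spacing scale in every query.

```typescript
// Hypothetical design tokens module: recurring decisions like spacing
// units and the color palette live in code the agent can read, so
// queries only need to convey exceptions.
const tokens = {
  spacing: { xs: 4, sm: 8, md: 16, lg: 24 }, // px, on a 4px base unit
  color: {
    surface: "#ffffff",
    textPrimary: "#1a1a1a",
    accent: "#2563eb",
  },
} as const;

// Small helper so components never hard-code pixel values directly.
function spacingPx(size: keyof typeof tokens.spacing): string {
  return `${tokens.spacing[size]}px`;
}

console.log(spacingPx("md")); // "16px"
```

With defaults centralized like this, a query can say "use the large spacing here" instead of re-stating the whole scale, which is exactly the query-versus-system split described above.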
Everything above is what you can do before execution. But humans can't plan and predict perfectly. No matter how thorough the spec, unexpected things surface during implementation. Specs aren't documents you write once and finish—they're living documents that keep getting updated as implementation reveals new things.
So I don't hand off execution all at once. I break it into units small enough to review comfortably, checking results and adjusting after each one. Rather than "define everything upfront and execute in one shot," maintaining control through small, iterative steps is more realistic. When recurring patterns emerge, I bake them into the system. The next task's blueprint gets sharper.
Ultimately, this is about what to delegate and what to do yourself. Just as a good manager knows "where in this task my judgment needs to be involved" and redraws that boundary every time, collaborating with agents means resetting the boundaries of delegation for each task.
Building on the premise from the previous post—"you can only direct others once you've done it yourself"—it's now about designing specifically how to direct. Making the blueprint in my head sharp, and communicating that clarity to the agent.