
The resultant

You learned this in high school physics: an object's acceleration is determined not by any single force acting on it, but by the resultant of all of them.

How fast a car goes is not just about the engine. Road friction, air resistance, tire grip, transmission efficiency — they all shape the outcome. A powerful engine on bald tires is just noise and heat.
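The car analogy can be made concrete with Newton's second law. A quick sketch, with all the numbers made up for illustration:

```python
# Toy illustration: the car's acceleration comes from the net (resultant)
# force, not from engine force alone. All values are invented.
ENGINE_FORCE = 4000.0      # N, what the engine delivers to the wheels
DRAG = 900.0               # N, air resistance at cruising speed
ROLLING_FRICTION = 600.0   # N, tires and road surface

MASS = 1200.0              # kg

net_force = ENGINE_FORCE - DRAG - ROLLING_FRICTION
acceleration = net_force / MASS  # Newton's second law: a = F_net / m

print(f"net force: {net_force:.0f} N, acceleration: {acceleration:.2f} m/s^2")
```

Double the engine force and the opposing terms still take their cut; put the same engine on bald tires (higher friction, lower grip) and the resultant shrinks no matter what the engine does.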

Agentic systems work the same way.

When you plug an LLM into tools, feed it context, wrap it in a runtime framework, and deploy it inside a sandbox, the system's final performance — how complex a problem it solves, how reliably it finishes a task, how autonomously it operates — is the resultant of all these forces, not the solo act of any one.

Model capability is one force. It sets the ceiling on the system's intelligence — reasoning depth, knowledge breadth, instruction-following precision.

Everything you do at the harness layer — how you arrange context, design tool interfaces, manage runtime state, isolate execution environments, validate outputs — is another force. It determines how much of that ceiling gets realized in practice, and whether the process is safe, controllable, and sustainable.
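What "the harness layer" means in practice can be sketched as a control loop. This is a minimal illustration, not any particular framework's API; every name here (`run_agent`, `demo_model`, the action dict shape) is hypothetical:

```python
# Minimal sketch of a harness loop: the model proposes actions, the
# harness mediates every side effect. All names are hypothetical.
def run_agent(model, tools, task, max_steps=10):
    context = [task]                        # harness-managed context
    for _ in range(max_steps):
        action = model(context)             # model capability: one force
        if action["type"] == "finish":
            return action["answer"]
        tool = tools[action["tool"]]        # tool interface design
        result = tool(**action["args"])     # sandboxed execution would go here
        context.append(result)              # context and state management
    return None                             # step budget: a harness-layer control

def demo_model(context):
    # Stub standing in for an LLM: call one tool, then finish.
    if len(context) == 1:
        return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "finish", "answer": context[-1]}

result = run_agent(demo_model, {"add": lambda a, b: a + b}, "add 2 and 3")
```

Every line of the loop is a design decision independent of the model itself: what goes into `context`, what the tool surface looks like, how many steps are allowed, where execution is isolated. That is the second force.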

Two forces, acting on the same system. The resultant determines what comes out.

This leads to a question — a simple-sounding one that will reshape your entire engineering strategy:

The question

Where should your force point?

Point it wrong, and every ounce of effort you invest gets quietly eaten by the other force's growth. Point it right, and the two forces do independent work — neither diminishes the other, and the system keeps getting stronger.

"Pointing it right" has a precise name in physics. But before we get to that word, we need to understand the force we cannot control — the model itself — and what it actually is.