better models, larger context windows, and more capable agents. But most real-world failures don't come from model capability; they come from how context is constructed, passed, and maintained.
This is a hard problem. The field is moving fast and techniques are still evolving. Much of it remains an experimental science, dependent on the context (pun intended), constraints, and environment you're working in.
In my work building multi-agent systems, a recurring pattern has emerged: performance is far less about how much context you give a model, and far more about how precisely you shape it.
This piece is an attempt to distill my learnings into something you can use.
It focuses on principles for managing context as a constrained resource: deciding what to include, what to exclude, and how to structure information so that agents remain coherent, efficient, and reliable over time.
Because at the end of the day, the strongest agents are not the ones that see the most. They are the ones that see the right things, in the right form, at the right time.
Terminology
Context engineering
Context engineering is the art of providing the right information, tools, and format to an LLM so it can complete a task. Good context engineering means finding the smallest possible set of high-signal tokens that gives the LLM the best chance of producing the desired outcome.
In practice, good context engineering usually comes down to four moves. You offload information to external systems (context offloading) so the model doesn't need to carry everything in-band. You retrieve information dynamically instead of front-loading all of it (context retrieval). You isolate context so one subtask doesn't contaminate another (context isolation). And you reduce history when needed, but only in ways that preserve what the agent will still need later (context reduction).
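To make the first two moves concrete, here is a minimal sketch of offloading and retrieval, assuming a simple file-based store. The function names and the store layout are illustrative, not from any particular framework.

```python
from pathlib import Path

STORE = Path("artefact_store")
STORE.mkdir(exist_ok=True)

def offload(key: str, content: str) -> str:
    """Context offloading: persist bulky content outside the prompt,
    keeping only a short handle in-band."""
    (STORE / f"{key}.txt").write_text(content)
    return f"[stored: {key}, {len(content)} chars]"  # this is all the model sees

def retrieve(key: str) -> str:
    """Context retrieval: pull content back in only when a step needs it."""
    return (STORE / f"{key}.txt").read_text()

raw_results = "... imagine 50 KB of raw search output here ..."
handle = offload("search_results_q3", raw_results)
# Later, only if a downstream step actually needs the detail:
detail = retrieve("search_results_q3")
```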
A common failure mode on the other side is context pollution: the presence of so much unnecessary, conflicting, or redundant information that it distracts the LLM.
Context rot
Context rot is a situation where an LLM's performance degrades as the context window fills up, even when it is within the stated limit. The LLM still has room to read more, but its reasoning starts to blur.
You may have noticed that the effective context window, where the model performs at top quality, is often much smaller than what the model is technically capable of.
There are two parts to this. First, a model doesn't maintain perfect recall across its entire context window. Information at the beginning and the end is recalled more reliably than things in the middle.
Second, larger context windows don't solve the problem for enterprise systems. Enterprise data is effectively unbounded and so frequently updated that even if the model could ingest everything, it could not maintain a coherent understanding of it.
Just as humans have a limited working-memory capacity, every new token introduced to the LLM depletes some of its attention budget. The scarcity stems from architectural constraints in the transformer, where every token attends to every other token. This leads to an n² interaction pattern for n tokens. As the context grows, the model is forced to spread its attention thinner across more relationships.
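The n² pattern falls directly out of the standard self-attention formulation (Vaswani et al., 2017): for n tokens, the score matrix has one entry per ordered token pair.

```latex
% Q and K are n x d_k matrices, so QK^T is an n x n matrix:
% n^2 pairwise attention scores, growing quadratically with context length.
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```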
Context compaction
Context compaction is the usual answer to context rot.
When the model is nearing the limit of its context window, it summarises its contents and reinitiates a new context window seeded with the previous summary. This is especially useful for long-running tasks, letting the model continue working without too much performance degradation.
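A minimal sketch of that loop, assuming a hypothetical `llm()` callable that returns a completion and a `count_tokens()` helper; the window size, threshold, and summary prompt are placeholders.

```python
import json

MAX_TOKENS = 128_000   # assumed window size
COMPACT_AT = 0.8       # compact when ~80% of the window is used

def maybe_compact(messages: list[dict], count_tokens, llm) -> list[dict]:
    """Summarise the transcript and reinitiate a fresh context from the summary."""
    if count_tokens(messages) < COMPACT_AT * MAX_TOKENS:
        return messages
    summary = llm(
        "Summarise this transcript. Preserve the objective, hard constraints, "
        "failed approaches, created files, and open questions:\n"
        + json.dumps(messages)
    )
    system = messages[0]  # the system prompt stays stable across compactions
    return [system, {"role": "user", "content": "Summary of prior work:\n" + summary}]
```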
Recent work on context folding offers a different approach: agents actively manage their own working context. An agent can branch off to handle a subtask and then fold it upon completion, collapsing the intermediate steps while retaining a concise summary of the outcome.
The challenge, however, is not in summarising, but in deciding what survives. Some things should remain stable and nearly immutable, such as the objective of the task and hard constraints. Others can be safely discarded. The difficulty is that the importance of a piece of information is often only revealed later.
Good compaction therefore needs to preserve facts that continue to constrain future actions: which approaches already failed, which files were created, which assumptions were invalidated, which handles can be revisited, and which uncertainties remain unresolved. Otherwise you get a neat, concise summary that reads well to a human and is useless to an agent.
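One way to enforce this is to compact into a fixed schema rather than free prose, so the summary has to account for each of these categories explicitly. A sketch, with field names of my own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class CompactionRecord:
    """What survives compaction: facts that still constrain future actions."""
    objective: str                                                # stable, near-immutable
    hard_constraints: list[str] = field(default_factory=list)
    failed_approaches: list[str] = field(default_factory=list)    # don't retry these
    created_files: list[str] = field(default_factory=list)        # handles to revisit
    invalidated_assumptions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)       # unresolved uncertainty
```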
Agent harness
A model is not an agent. The harness is what turns a model into one.
By harness, I mean everything around the model that decides how context is assembled and maintained: prompt serialisation, tool routing, retry policies, the rules governing what is preserved between steps, and so on.
When you look at real agent systems this way, a lot of supposed "model failures" look different. I've encountered plenty of these at work. They are actually harness failures: the agent forgot because nothing persisted the right state; it repeated work because the harness surfaced no durable artefact of the prior failure; it chose the wrong tool because the harness overloaded the action space; and so on.
A harness is, in some sense, a deterministic shell wrapped around a stochastic core. It makes the context legible, stable, and recoverable enough that the model can spend its limited reasoning budget on the task rather than on reconstructing its own state from a messy trace.
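In code, the shape of a harness is roughly the loop below. This is a sketch under stated assumptions: `assemble_context`, the `state_store` interface, and the `model` callable are all placeholders for your own components.

```python
MAX_STEPS = 20  # assumed step budget

def assemble_context(task: str, state) -> str:
    # The serialisation rules live here: what is included, in what order,
    # and what is summarised away. Trivial version for the sketch.
    return f"Task: {task}\nState so far:\n{state.render()}"

def run_agent(task: str, model, tools: dict, state_store) -> str:
    """A deterministic shell around a stochastic core: the harness, not the
    model, decides what gets assembled, persisted, and retried."""
    state = state_store.load(task)                     # durable state survives restarts
    for _ in range(MAX_STEPS):
        action = model(assemble_context(task, state))  # the only stochastic step
        if action.kind == "final":
            return action.content
        try:
            result = tools[action.tool](**action.args)  # tool routing
        except Exception as err:
            result = f"tool failed: {err}"             # surface failure as an artefact
        state.record(action, result)                   # persist, so the agent can't "forget"
        state_store.save(task, state)
    return "step budget exhausted"
```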
Communication between agents
As tasks get more complex, teams have defaulted towards multi-agent systems.
The mistake is to assume that more agents means more shared context. In practice, dumping a huge shared transcript into every sub-agent often creates exactly the opposite of specialisation. Now every agent is reading everything, inheriting everyone else's mistakes, and paying the same context bill over and over.
If only some context is shared, a new problem appears. What is considered authoritative when agents disagree? What stays local, and how are conflicts reconciled?
The way out is to treat communication not as shared memory, but as state transfer through well-defined interfaces.
For discrete tasks with clear inputs and outputs, agents should usually communicate through artefacts rather than raw traces. A web-search agent, for instance, doesn't need to pass along its entire browsing history. It only needs to surface the material that downstream agents can actually use.
That means intermediate reasoning, failed attempts, and exploration traces stay private unless explicitly needed. What gets passed forward are distilled outputs: extracted facts, validated findings, or decisions that constrain the next step.
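For the web-search example, the artefact handed downstream might be no more than this. A sketch; the fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class SearchArtefact:
    """What a web-search agent passes forward: distilled outputs, not its trace."""
    query: str
    findings: list[str]   # extracted, validated facts
    sources: list[str]    # URLs, so downstream agents can verify
    confidence: str       # e.g. "high", "mixed", "weak"
    # Deliberately absent: the full browsing history, failed queries, and
    # intermediate reasoning. Those stay private to the search agent.
```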
For more tightly coupled tasks, like a debugging agent where downstream reasoning genuinely depends on prior attempts, a limited form of trace sharing can be introduced. But this should be deliberate and scoped, not the default.
KV cache penalty
When AI models generate text, they often repeat many of the same calculations. KV caching is an inference-time optimisation technique that speeds this process up by remembering key information from earlier steps instead of recomputing everything again.
However, in multi-agent systems, if every agent shares the same context, you confuse the model with a ton of irrelevant details and pay a huge KV-cache penalty. Multiple agents working on the same task need to communicate with one another, but this shouldn't happen via shared memory.
This is why agents should communicate through minimal, structured outputs in a controlled manner.
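KV caching also rewards prompts that share a stable prefix: entries are cached left to right, so changing anything early in the prompt invalidates the cache for every token after it. A sketch of cache-friendly assembly under that assumption; the function is illustrative.

```python
def build_prompt(system: str, task_header: str, new_items: list[str]) -> str:
    """Append-only assembly: keep the shared prefix byte-identical across calls
    so the inference server can reuse its KV cache for those tokens."""
    # Anti-pattern: injecting fresh content (timestamps, reshuffled tool lists)
    # into the prefix makes every call a cache miss from that point onward.
    prefix = system + "\n" + task_header          # identical on every call
    return prefix + "\n" + "\n".join(new_items)   # new context only at the tail
```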
Keep the agent's toolset small and relevant
Tool choice is a context problem disguised as a capability problem.
As an agent accumulates more tools, the action space gets harder to navigate. There is now a higher chance of the model picking the wrong action and taking an inefficient route.
This has consequences. Tool schemas need to be far more distinct than most people realise. Tools have to be well understood and have minimal overlap in functionality. Their intended use must be made very clear, and their input parameters must be unambiguous.
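Concretely, compare two schemas for the same capability. This is a made-up example in the common JSON function-schema style; the second leaves the model far less room to guess.

```python
# Vague: overlaps with every other "search" tool, and `q` is ambiguous.
bad_tool = {
    "name": "search",
    "description": "Searches stuff.",
    "parameters": {"q": {"type": "string"}},
}

# Distinct: intended use, scope, and inputs are unambiguous.
good_tool = {
    "name": "search_internal_wiki",
    "description": (
        "Full-text search over the company wiki only. Use for internal "
        "policies and runbooks. Do NOT use for public web queries."
    ),
    "parameters": {
        "query": {"type": "string", "description": "Keywords, not a full sentence."},
        "max_results": {"type": "integer", "description": "1 to 10, default 5."},
    },
}
```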
One common failure mode that I've noticed even in my own team is that we tend to end up with very bloated sets of tools added over time. This leads to unclear decisions about which tools to use.
Agentic memory
This is a technique where the agent regularly writes notes that are persisted to memory outside of the context window. These notes get pulled back into the context window at later points.
The hardest part is deciding what deserves promotion into memory. My rule of thumb is that durable memory should contain things that continue to constrain future reasoning, such as persistent preferences. Everything else should face a very high bar. Storing too much is just another route back to context pollution, only now you have made it persistent.
But memory without revision is a trap. Once agents persist notes across steps or sessions, they also need mechanisms for conflict resolution, deletion, and demotion. Otherwise long-term memory becomes a landfill of outdated beliefs.
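A sketch of a memory store with revision built in, assuming a simple in-process dict. The promotion bar, conflict rule, and demotion policy are all illustrative.

```python
import time

class AgentMemory:
    """Notes persisted outside the context window, with revision mechanisms
    so the store doesn't become a landfill of outdated beliefs."""

    def __init__(self) -> None:
        self.notes: dict[str, dict] = {}

    def write(self, key: str, content: str) -> None:
        # Conflict resolution here is last-writer-wins on the same key; a real
        # system might instead ask the model to reconcile the two notes.
        self.notes[key] = {"content": content, "written_at": time.time(), "hits": 0}

    def recall(self, key: str) -> str | None:
        note = self.notes.get(key)
        if note is None:
            return None
        note["hits"] += 1  # usage counts inform demotion decisions
        return note["content"]

    def demote_stale(self, max_age_s: float) -> None:
        # Deletion/demotion: drop notes that are old and have never been recalled.
        now = time.time()
        self.notes = {
            k: n for k, n in self.notes.items()
            if n["hits"] > 0 or now - n["written_at"] < max_age_s
        }
```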
To sum up
Context engineering is still evolving, and there is no single correct way to do it. Much of it remains empirical, shaped by the systems we build and the constraints we operate under.
Left unchecked, context grows, drifts, and eventually collapses under its own weight.
If well managed, context becomes the difference between an agent that merely responds and one that can reason, adapt, and stay coherent across long and complex tasks.



