In my latest post, I explored how hybrid search can be used to considerably improve the effectiveness of a RAG pipeline. RAG, in its basic form, using just semantic search over embeddings, can be very effective, allowing us to bring the power of AI to our own documents. However, semantic search, as powerful as it is, can sometimes miss exact matches of the user’s query when applied to large knowledge bases, even when those matches exist in the documents. This weakness of traditional RAG can be addressed by adding a keyword search component to the pipeline, such as BM25. In this way, hybrid search, combining semantic and keyword search, leads to much more comprehensive results and significantly improves the performance of a RAG system.
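The post doesn’t prescribe a particular way to merge the two result lists, but as a minimal sketch, reciprocal rank fusion (RRF) is one common choice, assuming we already have ranked chunk IDs coming back from the vector search and from BM25:

```python
def reciprocal_rank_fusion(semantic_ids: list[str],
                           keyword_ids: list[str],
                           k: int = 60) -> list[str]:
    """Merge two ranked lists of chunk IDs into a single hybrid ranking.

    Each chunk receives a score of 1 / (k + rank) from every list it appears in,
    so chunks ranked highly by either search leg end up near the top.
    """
    scores: dict[str, float] = {}
    for ranking in (semantic_ids, keyword_ids):
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```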
Be that as it may, even when using RAG with hybrid search, we can still sometimes miss important information that is scattered across different parts of the document. This can happen because, when a document is broken down into text chunks, the context — that is, the surrounding text of the chunk that forms part of its meaning — is sometimes lost. This is especially likely for text that is complex, with meaning that is interconnected and spread across multiple pages, and thus inevitably cannot be wholly contained within a single chunk. Think, for example, of referencing a table or an image across several different text sections without explicitly stating which table we are referring to (e.g., “as shown in the Table, profits increased by 6%” — which table?). Consequently, when the text chunks are later retrieved, they are stripped of their context, sometimes resulting in the retrieval of irrelevant chunks and the generation of irrelevant responses.
This loss of context has been a major issue for RAG systems for some time, and several not-so-successful solutions have been explored to mitigate it. An obvious attempt is to increase the chunk size, but this usually also dilutes the semantic meaning of each chunk and ends up making retrieval less precise. Another approach is to increase the chunk overlap, as sketched below. While this helps preserve more context, it also increases storage and computation costs. Most importantly, it does not fully solve the problem — we can still have important interconnections lying outside the chunk boundaries. More advanced approaches attempting to solve this issue include Hypothetical Document Embeddings (HyDE) or the Document Summary Index. However, these still fail to provide substantial improvements.
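To make the chunk-overlap idea concrete, here is a minimal sketch of a sliding-window chunker; the window and overlap sizes are arbitrary illustrative values, not recommendations:

```python
def chunk_with_overlap(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks, where each chunk repeats the last
    `overlap` characters of the previous one, trading storage for context."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```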
Ultimately, an approach that effectively resolves this and significantly improves the results of a RAG system is contextual retrieval, originally introduced by Anthropic in 2024. Contextual retrieval aims to address the loss of context by preserving the context of the chunks and, consequently, improving the accuracy of the retrieval step of the RAG pipeline.
. . .
What about context?
Before saying anything about contextual retrieval, let’s take a step back and talk a little bit about what context actually is. Sure, we’ve all heard about the context of LLMs or context windows, but what are these really about?
To be precise, context refers to all the tokens that are available to the LLM and based on which it predicts the next word — remember, LLMs generate text by predicting it one word at a time. Thus, context includes the user prompt, the system prompt, instructions, skills, or any other guidelines influencing how the model produces a response. Importantly, the part of the final response the model has produced so far is also part of the context, since each new token is generated based on everything that came before it.
Naturally, different contexts lead to very different model outputs. For example:
- ‘I went to a restaurant and ordered a’ might output ‘pizza.’
- ‘I went to the pharmacy and bought a’ might output ‘medicine.’
A fundamental limitation of LLMs is their context window. The context window of an LLM is the maximum number of tokens that can be passed at once as input to the model and taken into account to produce a single response. Different LLMs have larger or smaller context windows. Modern frontier models can handle hundreds of thousands of tokens in a single request, whereas earlier models often had context windows as small as 8k tokens.
In a perfect world, we would simply pass all the information that the LLM needs to know in the context, and we would most likely get excellent answers. And this is true to some extent — a frontier model like Opus 4.6 with a 200k token context window corresponds to about 500-600 pages of text. If all the information we need to provide fits within this limit, we can indeed just include everything as is in the input to the LLM and get a great answer.
The issue is that, for most real-world AI use cases, we need to rely on some sort of knowledge base whose size goes far beyond this threshold — think, for instance, of legal libraries or manuals for technical equipment. Since models have these context window limitations, we unfortunately can’t just pass everything to the LLM and let it magically answer — we have to somehow determine what is the most important information to include in our limited context window. And that is essentially what the RAG methodology is all about — selecting the right information from a large knowledge base so as to effectively answer a user’s query. Ultimately, this emerges as an optimization/engineering problem — context engineering — identifying the right information to include in a limited context window, so as to produce the best possible responses.
This is the most crucial part of a RAG system — making sure the right information is retrieved and passed over as input to the LLM. This can be done with semantic search and keyword search, as already explained. But, even when bringing in all semantically relevant chunks and all exact matches, there is still a good chance that some important information may be left behind.
But what kind of information would that be? Since we have covered the meaning with semantic search and the exact matches with keyword search, what other type of information is there to consider?
Different documents with inherently different meanings may include parts that are similar or even identical. Think of a recipe book and a chemical processing handbook both instructing the reader to ‘Heat the mixture slowly’. The semantic meaning of such a text chunk and the exact words are very similar — identical, even. In this example, what shapes the meaning of the text and allows us to distinguish between cooking and chemical engineering is what we are referring to as context.

Thus, this is the kind of additional information we aim to preserve. And this is exactly what contextual retrieval does: it preserves the context — the surrounding meaning — of each text chunk.
. . .
What about contextual retrieval?
So, contextual retrieval is a methodology used in RAG that aims to preserve the context of each chunk. In this way, when a chunk is retrieved and passed over to the LLM as input, we are able to preserve as much of its initial meaning as possible — the semantics, the keywords, the context — all of it.
To achieve this, contextual retrieval suggests that we first generate a helper text for each chunk — namely, the contextual text — that allows us to situate the text chunk within the original document it comes from. In practice, we ask an LLM to generate this contextual text for each chunk. To do so, we provide the document, along with the specific chunk, in a single request to an LLM and prompt it to “provide the context that situates the specific chunk within the document“. A prompt for generating the contextual text for our Italian Cookbook chunk would look something like this:
<the full Italian Cookbook document the chunk comes from>

Here is the chunk we want to place within the context of the full document:

<the specific chunk>

Provide a brief context that situates this chunk within the overall
document to improve search retrieval. Answer only with the concise
context and nothing else.

The LLM returns the contextual text, which we combine with our initial text chunk. In this way, for each chunk of our initial text, we generate a contextual text that describes how this specific chunk is positioned in its parent document. For our example, this would be something like:
Context: Recipe step for simmering homemade tomato pasta sauce.
Chunk: Heat the mixture slowly and stir regularly to prevent it from sticking.

This is indeed much more informative and specific! Now there is no doubt about what this mysterious mixture is, because all the information needed to identify whether we are talking about tomato sauce or laboratory starch solutions is conveniently included within the same chunk.
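To make this concrete, here is a minimal sketch of the per-chunk LLM call, written with the Anthropic Python SDK; the model name and the `contextualize_chunk` helper are illustrative assumptions rather than part of the original recipe:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

CONTEXT_PROMPT = """<document>
{document}
</document>

Here is the chunk we want to situate within the whole document:

<chunk>
{chunk}
</chunk>

Provide a brief context that situates this chunk within the overall
document to improve search retrieval. Answer only with the concise
context and nothing else."""


def contextualize_chunk(document: str, chunk: str) -> str:
    """Generate the contextual text for one chunk and prepend it to the chunk."""
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # illustrative model choice
        max_tokens=150,
        messages=[{
            "role": "user",
            "content": CONTEXT_PROMPT.format(document=document, chunk=chunk),
        }],
    )
    context = response.content[0].text.strip()
    return f"Context: {context}\nChunk: {chunk}"
```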
From this point on, we treat the initial chunk text and the contextual text as an unbreakable pair. Then, the rest of the steps of RAG with hybrid search are carried out in essentially the same way. That is, for each text chunk, prepended with its contextual text, we create embeddings that are stored in a vector store, along with a BM25 index.
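Assuming, purely for illustration, sentence-transformers for the embeddings and the rank_bm25 package for the keyword index, the indexing step might look roughly like this, with both search legs built over the same contextualized text:

```python
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer


def build_indexes(contextualized_chunks: list[str]):
    """Build a dense embedding matrix and a BM25 index over the
    contextual-text + chunk pairs produced at ingestion time."""
    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    embeddings = embedder.encode(contextualized_chunks)  # store these in your vector store

    tokenized = [text.lower().split() for text in contextualized_chunks]
    bm25 = BM25Okapi(tokenized)

    return embeddings, bm25
```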

This approach, as simple as it is, leads to astonishing improvements in the retrieval performance of RAG pipelines. According to Anthropic, contextual retrieval improves retrieval accuracy by an impressive 35%.
. . .
Reducing cost with prompt caching
I hear you asking, “But isn’t this going to break the bank?“. Surprisingly, no.
Intuitively, we understand that this setup is going to significantly increase the ingestion cost of a RAG pipeline — essentially double it, if not more. After all, we have now added a bunch of extra calls to the LLM, haven’t we? This is true to some extent — indeed, for each chunk, we now make an additional call to the LLM in order to situate it within its source document and get the contextual text.
However, this is a cost we only pay once, at the document ingestion stage. Unlike other techniques that attempt to preserve context at runtime — such as Hypothetical Document Embeddings (HyDE) — contextual retrieval performs the heavy lifting during document ingestion. In runtime approaches, extra LLM calls are required for every user query, which can quickly scale up latency and operational costs. In contrast, contextual retrieval shifts the computation to the ingestion phase, meaning that the improved retrieval quality comes with no additional overhead at runtime. On top of this, further techniques can be used to reduce the cost of contextual retrieval even more. More precisely, prompt caching can be used to generate the summary of the document only once and then situate each chunk against the produced document summary.
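As a rough sketch of how prompt caching might be wired in with the Anthropic SDK, the large document block can be marked as cacheable so that every per-chunk call after the first reuses it instead of reprocessing it; the model name and prompt wording are illustrative assumptions:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set


def contextualize_with_caching(document: str, chunk: str) -> str:
    """Per-chunk contextualization call where the (large) document block is
    marked for prompt caching, so subsequent chunks of the same document
    reuse the cached prefix instead of paying for it again."""
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # illustrative model choice
        max_tokens=150,
        messages=[{
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"<document>\n{document}\n</document>",
                    "cache_control": {"type": "ephemeral"},  # cached across calls
                },
                {
                    "type": "text",
                    "text": (
                        "Here is the chunk we want to situate within the whole "
                        f"document:\n<chunk>\n{chunk}\n</chunk>\n"
                        "Provide a brief context that situates this chunk within "
                        "the overall document to improve search retrieval. "
                        "Answer only with the concise context and nothing else."
                    ),
                },
            ],
        }],
    )
    context = response.content[0].text.strip()
    return f"Context: {context}\nChunk: {chunk}"
```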
. . .
On my mind
Contextual retrieval represents a simple yet powerful improvement to traditional RAG systems. By enriching each chunk with a contextual text that pinpoints its semantic position within its source document, we dramatically reduce the ambiguity of each chunk and thus improve the quality of the information passed to the LLM. Combined with hybrid search, this approach allows us to preserve semantics, keywords, and context simultaneously.
Loved this post? Let’s be friends! Join me on:
📰 Substack 💌 Medium 💼 LinkedIn ☕ Buy me a coffee!
All images are by the author, unless mentioned otherwise.



