Knowledge Retrieval and RAG Recommender Systems

How should LLM-based recommenders retrieve from massive item corpora?

When conversational recommenders need to search millions of items, the LLM cannot memorize the corpus. What retrieval strategies work best under different constraints, and how do they trade off latency, sample efficiency, and scalability?

Note · 2026-05-03 · sourced from Recommenders Conversational

When a conversational recommender system must retrieve from a corpus of hundreds of millions of items (videos, products, URLs), the LLM cannot memorize the corpus in its parameters. The CRS must connect the LLM to an external retrieval mechanism. RecLLM identifies four strategies, each with a distinct architectural and efficiency profile.

Generalized Dual Encoder Model: the standard production retrieval pattern. One neural-net tower encodes the context (the conversation), another encodes items. Item embeddings are precomputed offline and stored in an approximate-nearest-neighbor (ANN) index; the conversation is encoded online and used to query that index, giving sub-linear retrieval time. The cost is sample efficiency: embeddings pulled from the LLM's internals do not align with the item space for free, so the dual encoder must be trained from scratch on vast data to align the two embedding spaces.
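The offline/online split can be sketched with toy vectors. The 3-d embeddings and item ids below are invented stand-ins for trained tower outputs, and the linear scan stands in for a real ANN library (ScaNN, FAISS, etc.):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Offline: the item tower produces embeddings, stored in an index.
# (Hypothetical 3-d vectors; real embeddings are learned and much wider.)
item_index = {
    "movie_a": [0.9, 0.1, 0.0],
    "movie_b": [0.1, 0.8, 0.2],
    "movie_c": [0.0, 0.2, 0.9],
}

def retrieve(context_embedding, index, k=2):
    """Online: score the context embedding against the precomputed item
    index. Real systems replace this linear scan with sub-linear ANN search."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(context_embedding, kv[1]),
                    reverse=True)
    return [item_id for item_id, _ in scored[:k]]

print(retrieve([0.85, 0.15, 0.05], item_index, k=2))  # → ['movie_a', 'movie_b']
```

The key property to notice is the asymmetry: the expensive item-side work happens once offline, while the online path encodes only the conversation and issues a single index query.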

Direct LLM Search: the LLM directly outputs item ids or titles as text. The recommendation engine plays no role beyond fuzzy or exact matching of the generated text against the corpus. The LLM must learn to output valid ids/titles through pretraining and fine-tuning, so this works for small corpora the LLM can memorize and fails for huge ones.
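The grounding step can be as simple as fuzzy string lookup. A minimal sketch using the stdlib `difflib`, with an invented catalog (a real system would match against the full corpus and likely a purpose-built matcher):

```python
import difflib

catalog = ["The Godfather", "Roman Holiday", "La Dolce Vita"]

def ground_generated_title(generated, catalog, cutoff=0.6):
    """Map the LLM's free-text output onto a real catalog entry.
    Returns None when nothing matches closely enough, i.e. the model
    likely hallucinated a title that does not exist in the corpus."""
    matches = difflib.get_close_matches(generated, catalog, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(ground_generated_title("Roman Holliday", catalog))  # typo → 'Roman Holiday'
```

The `None` branch is the interesting failure mode: as the corpus grows past what the LLM memorized, more generations fall below the match cutoff and the strategy degrades.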

Concept Based Search: the LLM outputs a list of concepts (e.g., "romantic comedies set in Italy"). The recommendation engine embeds these concepts, aggregates them into a single context embedding, and runs ANN search. Concept extraction is a natural LLM task that can be taught via in-context learning or fine-tuning. The LLM does the linguistic work; the engine does the retrieval.
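A toy sketch of the aggregate-then-search flow. The concept and item embeddings are invented, and mean pooling is one plausible aggregation choice among several:

```python
# Hypothetical embeddings; a real system would produce these
# with a trained text encoder over a large vocabulary of concepts.
concept_emb = {
    "romantic comedy": [0.8, 0.2, 0.0],
    "set in italy":    [0.1, 0.9, 0.0],
}
item_emb = {
    "roman_holiday": [0.5, 0.5, 0.0],
    "alien_sequel":  [0.0, 0.1, 0.9],
}

def concept_context(concepts):
    """Aggregate per-concept embeddings into one context vector
    (mean pooling here; max pooling or weighting are alternatives)."""
    vecs = [concept_emb[c] for c in concepts]
    return [sum(dims) / len(vecs) for dims in zip(*vecs)]

def top_item(context):
    # Dot-product scoring stands in for the ANN index lookup.
    return max(item_emb,
               key=lambda i: sum(a * b for a, b in zip(context, item_emb[i])))

ctx = concept_context(["romantic comedy", "set in italy"])
print(top_item(ctx))  # → 'roman_holiday'
```

Because only concept strings cross the LLM boundary, no shared conversation/item embedding space needs to be trained, which is exactly what makes this more sample-efficient than the dual encoder.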

Search API Lookup: the LLM outputs a search query that gets fed into a black-box search API. The API returns items. This treats search as a tool the LLM calls, with no shared embedding space. Works whenever a search API is available; trivially supports proprietary corpora.
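A sketch of the tool-call shape, with a faked in-memory backend standing in for the black-box API (the function names and query strings are illustrative, not a real endpoint):

```python
def search_api(query, limit=5):
    """Stand-in for a black-box search endpoint. The real API is opaque
    to the LLM and shares no embedding space with it."""
    fake_backend = {
        "italian romantic comedies": ["Roman Holiday", "La Dolce Vita"],
    }
    return fake_backend.get(query.lower(), [])[:limit]

def recommend(llm_query):
    # The LLM's only job is to emit a good query string;
    # retrieval is fully delegated to the search tool.
    results = search_api(llm_query)
    return results or ["<no results; ask a clarifying question>"]

print(recommend("Italian romantic comedies"))
```

The empty-result branch matters in a CRS: when the tool returns nothing, the natural recovery is another conversational turn rather than a retrieval-side fix.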

The taxonomy matters for system design. The choice depends on corpus size, latency budget, LLM training data availability, and whether you can train a dual encoder. Each strategy addresses a different combination of constraints. Mixing them in a hybrid system is feasible — concept-based for breadth, dual-encoder for personalization, search-API for proprietary inventory — and probably necessary for any large CRS.
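One way to picture the constraint-driven choice is a routing rule. The thresholds and priority order below are illustrative assumptions, not from the source:

```python
def choose_strategy(corpus_size, has_search_api,
                    can_train_dual_encoder, llm_memorized_corpus):
    """Illustrative decision rule mapping constraints to a retrieval
    strategy; a hybrid system might run several branches in parallel."""
    if llm_memorized_corpus and corpus_size < 100_000:
        return "direct_llm_search"        # small corpus the LLM knows
    if can_train_dual_encoder:
        return "dual_encoder"             # enough data to align two towers
    if has_search_api:
        return "search_api_lookup"        # delegate to an existing tool
    return "concept_based_search"         # works with any text encoder

print(choose_strategy(10**8, True, False, False))  # → 'search_api_lookup'
```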


Source: Recommenders Conversational


large-corpus LLM CRS retrieval has four distinct strategies — dual-encoder, concept-based, direct LLM search, and search API lookup