How should LLM-based recommenders retrieve from massive item corpora?
When conversational recommenders need to search millions of items, the LLM cannot memorize the corpus. What retrieval strategies work best under different constraints, and how do they trade off latency, sample efficiency, and scalability?
When a conversational recommender system must retrieve from a corpus of hundreds of millions of items (videos, products, URLs), the LLM cannot memorize the corpus in its parameters. The CRS must connect the LLM to an external retrieval mechanism. RecLLM identifies four distinct strategies, each with its own architectural and efficiency profile.
Generalized Dual Encoder Model: the standard production retrieval pattern. One neural-net tower encodes the context (the conversation), another encodes items. Item embeddings are precomputed offline and stored in an approximate-nearest-neighbor index; the conversation is encoded online and used to query the ANN index, so retrieval time is sub-linear in corpus size. The drawback: pulling the context embedding from the LLM's internals hurts sample efficiency, because training the dual encoder from scratch needs vast data to align the two embedding spaces.
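A minimal sketch of the dual-encoder flow, using brute-force inner-product search over a numpy matrix as a stand-in for a real ANN index (FAISS, ScaNN, etc.); `encode_item` and `encode_context` are hypothetical placeholders for the trained towers, not RecLLM's actual encoders.

```python
import numpy as np

EMBED_DIM = 64

def encode_item(item_text: str) -> np.ndarray:
    """Hypothetical item tower; in production this is a trained neural encoder."""
    rng = np.random.default_rng(abs(hash(item_text)) % (2**32))
    v = rng.standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

def encode_context(conversation: str) -> np.ndarray:
    """Hypothetical context tower encoding the conversation so far."""
    rng = np.random.default_rng(abs(hash(conversation)) % (2**32))
    v = rng.standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

# Offline: precompute item embeddings. A real system loads these into an ANN
# index; a plain matrix keeps the sketch self-contained.
corpus = ["Roman Holiday", "The Italian Job", "Under the Tuscan Sun"]
item_matrix = np.stack([encode_item(t) for t in corpus])

# Online: encode the conversation and take the top-k items by inner product.
def retrieve(conversation: str, k: int = 2) -> list[str]:
    scores = item_matrix @ encode_context(conversation)
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]

print(retrieve("I want a lighthearted movie set in Italy"))
```

Swapping the full matrix multiply for an ANN lookup is what buys the sub-linear retrieval time on a corpus of hundreds of millions of items.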
Direct LLM Search: the LLM directly outputs item ids or titles as text. The recommendation engine plays no role beyond fuzzy or exact matching of the generated text against the corpus. The LLM must learn to output valid ids/titles through pretraining and fine-tuning. Works for small corpora the LLM can memorize; fails for huge corpora.
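A sketch of the matching step for this strategy, assuming the LLM has already generated candidate titles as free text; `difflib.get_close_matches` from the standard library stands in for whatever fuzzy matcher a production system would actually use.

```python
from difflib import get_close_matches

corpus_titles = ["Roman Holiday", "The Italian Job", "Under the Tuscan Sun"]

# Titles the LLM generated directly; they may not match corpus titles exactly.
llm_output = ["Roman Holidays", "Under the Tuscan Sun"]

matched = []
for title in llm_output:
    # Fuzzy-match each generated title against the real corpus.
    hits = get_close_matches(title, corpus_titles, n=1, cutoff=0.8)
    if hits:
        matched.append(hits[0])

print(matched)  # ['Roman Holiday', 'Under the Tuscan Sun']
```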
Concept Based Search: the LLM outputs a list of concepts (e.g., "romantic comedies set in Italy"). The recommendation engine embeds these concepts, aggregates them into a single context embedding, and runs ANN search. Concept extraction is a natural LLM task that in-context learning or fine-tuning can teach. The LLM does the linguistic work; the engine does the retrieval.
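A sketch of the aggregation step, assuming the LLM has already emitted a concept list; `embed_concept` is a hypothetical encoder (a real system would embed concepts into the same space as the item index), and mean pooling is one simple choice of aggregation.

```python
import numpy as np

EMBED_DIM = 64

def embed_concept(concept: str) -> np.ndarray:
    """Hypothetical concept encoder sharing the item-embedding space."""
    rng = np.random.default_rng(abs(hash(concept)) % (2**32))
    v = rng.standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

# Concepts the LLM extracted from the conversation.
concepts = ["romantic comedy", "set in Italy", "lighthearted"]

# Aggregate per-concept embeddings into one context embedding (mean pooling).
context_embedding = np.mean([embed_concept(c) for c in concepts], axis=0)
context_embedding /= np.linalg.norm(context_embedding)

# context_embedding now queries the ANN index exactly as in the dual-encoder sketch.
```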
Search API Lookup: the LLM outputs a search query that gets fed into a black-box search API. The API returns items. This treats search as a tool the LLM calls, with no shared embedding space. Works whenever a search API is available; trivially supports proprietary corpora.
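A sketch of the tool-call shape, where both `generate_search_query` (the LLM call) and `search_api` (the black-box backend) are hypothetical stand-ins rather than any specific API.

```python
def generate_search_query(conversation: str) -> str:
    """Hypothetical LLM call that rewrites the conversation into a search query."""
    # In a real system this is a prompted or fine-tuned LLM.
    return "romantic comedies set in Italy"

def search_api(query: str, limit: int = 10) -> list[str]:
    """Hypothetical black-box search API over a proprietary corpus."""
    return [f"result {i} for '{query}'" for i in range(limit)]

conversation = "I want something funny and romantic, maybe set in Italy"
query = generate_search_query(conversation)
candidates = search_api(query, limit=5)
# No shared embedding space: the LLM and the search backend exchange only text.
```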
The taxonomy matters for system design. The choice depends on corpus size, latency budget, LLM training data availability, and whether you can train a dual encoder. Each strategy addresses a different combination of constraints. Mixing them in a hybrid system is feasible — concept-based for breadth, dual-encoder for personalization, search-API for proprietary inventory — and probably necessary for any large CRS.
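One way a hybrid system might merge candidates from several strategies, sketched under the assumption that each strategy returns a ranked list; the round-robin merge and the example lists are illustrative, not taken from RecLLM.

```python
from itertools import zip_longest

def merge_candidates(*candidate_lists: list[str], limit: int = 10) -> list[str]:
    """Interleave ranked candidates from multiple strategies, dropping duplicates."""
    seen, merged = set(), []
    for group in zip_longest(*candidate_lists):
        for item in group:
            if item is not None and item not in seen:
                seen.add(item)
                merged.append(item)
                if len(merged) == limit:
                    return merged
    return merged

# Illustrative split of responsibilities: concept-based for breadth,
# dual-encoder for personalization, search-API for proprietary inventory.
concept_hits = ["Roman Holiday", "The Italian Job"]
dual_encoder_hits = ["Under the Tuscan Sun", "Roman Holiday"]
search_api_hits = ["Letters to Juliet"]
print(merge_candidates(concept_hits, dual_encoder_hits, search_api_hits, limit=5))
```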
Source: Recommenders Conversational
Related concepts in this collection
- How can LLM agents handle huge candidate lists without breaking?
  ReAct agents fail when retrieval tools return hundreds of items that overflow prompts. What architectural changes let LLMs work effectively with large candidate sets in recommendation systems?
  extends: InteRecAgent's candidate bus is one architectural answer to the large-corpus retrieval problem this taxonomy frames
- How should language models integrate into recommender systems?
  When building recommendation systems with LLMs, should you use them as feature encoders, token generators, or direct recommenders? The choice affects efficiency, bias, and compatibility with existing pipelines.
  complements: parallel taxonomy at the integration level — these four retrieval strategies sit underneath the three integration patterns
- Do vector embeddings actually measure task relevance?
  Vector embeddings rank semantic similarity, but RAG systems need topical relevance. When these diverge—as with king/queen versus king/ruler—does similarity-based retrieval fail in production?
  tension with: dual-encoder retrieval inherits the semantic-similarity-not-task-relevance failure — concept-based and search-API strategies are partial answers
- Why do queries and documents occupy different embedding spaces?
  Queries and documents express the same information in fundamentally different ways—short and interrogative versus long and declarative. Understanding this mismatch is crucial for why direct embedding retrieval often fails.
  complements: HyDE's hypothetical-document trick is a related bridge between conversational input and item index — concept-based search uses LLM-generated concepts the same way
Original note title
large-corpus LLM CRS retrieval has four distinct strategies — dual-encoder, concept-based, direct LLM search, and search API lookup