How should language models integrate into recommender systems?
When building recommendation systems with LLMs, should you use them as feature encoders, token generators, or direct recommenders? The choice affects efficiency, bias, and compatibility with existing pipelines.
The Wu et al. survey of LLM-based recommendation organizes the field into three paradigms with distinct architectures and trade-offs.
LLM Embeddings + RS treats the language model as a feature extractor. Item and user features feed into the LLM, which outputs corresponding embeddings. A traditional recommender model consumes these knowledge-aware embeddings for recommendation tasks. The LLM doesn't make recommendations; it enriches representations.
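A minimal sketch of this paradigm, with a toy stand-in for the LLM encoder (the `llm_encode` stub and the item data are hypothetical, not from the survey): the language model only produces embeddings, and a conventional dot-product ranker does the actual recommending.

```python
import math

def llm_encode(text: str, dim: int = 8) -> list[float]:
    """Stand-in for an LLM feature encoder: a deterministic toy embedding.
    A real system would pass the text through a frozen LLM and take a
    pooled hidden state as the knowledge-aware embedding."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def recommend(user_profile: str, items: dict[str, str], k: int = 2) -> list[str]:
    """Traditional top-k retrieval over LLM-derived embeddings.
    The LLM never ranks anything; scoring is a plain dot product,
    so any existing retrieval stack can consume these vectors."""
    u = llm_encode(user_profile)
    scored = {
        item_id: sum(a * b for a, b in zip(u, llm_encode(desc)))
        for item_id, desc in items.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]

items = {
    "i1": "sci-fi space opera novel",
    "i2": "cookbook for italian pasta",
    "i3": "hard science fiction anthology",
}
print(recommend("reader who loves science fiction", items))
```

The design point is the interface: because the LLM's output is just a vector, the downstream recommender is unchanged, which is why this paradigm is the most compatible with existing pipelines.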
LLM Tokens + RS goes a step further. The LLM generates semantic tokens from item and user features. These tokens capture preferences through semantic mining and feed into the recommendation system's decision-making. Tokens are more compact than full embeddings and easier to integrate into existing pipelines.
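A sketch of the token paradigm under the same toy assumptions (the keyword-lookup `llm_semantic_tokens` stub and the tag vocabulary are hypothetical; a real system would prompt an LLM to emit preference tags): the discrete tokens slot into a downstream recommender as an ordinary categorical feature.

```python
def llm_semantic_tokens(item_text: str) -> list[str]:
    """Stand-in for LLM semantic mining: in practice the LLM is prompted
    to emit a short list of preference tags for an item or user."""
    vocab = {"space": "sci-fi", "science": "sci-fi",
             "pasta": "cooking", "thriller": "suspense"}
    return sorted({tag for word, tag in vocab.items()
                   if word in item_text.lower()})

def tokens_to_features(tokens: list[str], tag_vocab: list[str]) -> list[int]:
    """Multi-hot encoding: the generated tokens enter the recommender
    exactly like any other categorical feature column."""
    return [1 if t in tokens else 0 for t in tag_vocab]

TAG_VOCAB = ["sci-fi", "cooking", "suspense"]
toks = llm_semantic_tokens("A space science anthology")
print(toks, tokens_to_features(toks, TAG_VOCAB))
```

Compared with raw embeddings, the token representation is human-readable and fixed-vocabulary, which is what makes it cheap to wire into existing feature pipelines.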
LLM as RS is the direct paradigm. The pre-trained LLM becomes the recommendation system itself, with input sequences containing profile descriptions, behavior prompts, and task instructions. The LLM outputs recommendations directly. This is the most ambitious paradigm and faces challenges from position bias, popularity bias, and fairness issues inherent to language models.
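A sketch of the direct paradigm, assembling the three input components the survey names (profile description, behavior prompt, task instruction). The `mock_llm` function is a hypothetical stand-in for a real generative model call; the parsing step that filters hallucinated item ids is something real LLM-as-RS systems also need.

```python
def build_prompt(profile: str, history: list[str], candidates: list[str]) -> str:
    """Compose the input sequence: profile description,
    behavior prompt, and task instruction."""
    return (
        f"User profile: {profile}\n"
        f"Recently interacted items: {', '.join(history)}\n"
        f"Candidates: {', '.join(candidates)}\n"
        "Instruction: rank the candidates by likely preference, "
        "comma-separated, best first."
    )

def mock_llm(prompt: str) -> str:
    """Stand-in for a generative LLM call; a real system would send the
    prompt to the model and receive free-text output back."""
    return "i3, i1, i2"

def rank(profile: str, history: list[str], candidates: list[str]) -> list[str]:
    raw = mock_llm(build_prompt(profile, history, candidates))
    # Parse free text back into item ids, dropping anything the model
    # "recommends" that was never in the candidate set.
    ranked = [t.strip() for t in raw.split(",")]
    return [r for r in ranked if r in candidates]

print(rank("sci-fi reader", ["i1"], ["i1", "i2", "i3"]))
```

Note that the ranking now happens inside free-text generation, which is exactly where the paradigm's position and popularity biases enter: the model's output order, not a scoring function, determines the recommendation.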
The three paradigms differ in efficiency, latency, and how much they leverage existing recommendation infrastructure. Embeddings are most compatible with existing pipelines but underuse LLM capability. Direct LLM-as-RS maximizes LLM use but introduces LLM-specific biases and latency. Tokens are an intermediate point. Choice depends on what the deployment can tolerate — production latency, existing pipeline investment, and tolerance for LLM-specific biases all factor in.
The survey's framing is methodologically useful: rather than treating "LLM-based recommendation" as one thing, naming the three paradigms clarifies which problems different research efforts are actually solving.
Source: Recommenders General
Related concepts in this collection
- Can LLMs gain collaborative filtering strength without losing text understanding?
LLM recommenders excel at cold-start through text semantics but struggle with warm interactions where collaborative patterns matter most. Can external collaborative models be integrated into LLM reasoning to close this gap?
exemplifies: CoLLM is the embeddings-into-tokens instantiation of the three-paradigm taxonomy
- Can one text encoder unify all recommendation tasks?
Does framing diverse recommendation problems—from sequential prediction to review generation—as natural language tasks allow a single model to learn shared structure? Can this approach generalize to unseen items and new task phrasings?
exemplifies: P5 is the direct-LLM-as-recommender paradigm executed end-to-end
- Does LLM input augmentation beat direct LLM recommendation?
Can LLMs enrich item descriptions more effectively than making recommendations directly? This explores whether specialized models work better when LLMs focus on what they do best: content understanding rather than ranking.
tension with: LLM-Rec argues input-augmentation beats LLM-as-recommender empirically — direct integration is not the strongest paradigm in many tasks
- Can item identifiers balance uniqueness and semantic meaning?
Should LLM-based recommenders prioritize distinctive item references or semantic understanding? This explores whether a hybrid approach can overcome the tradeoffs forced by pure ID or pure text indexing.
complements: multi-facet IDs are the item-indexing primitive that all three paradigms need
- Where do recommendation biases come from in language models?
Do LLM-based recommenders inherit systematic biases from pretraining that differ fundamentally from traditional collaborative filtering systems? Understanding these sources matters for building fairer, more accurate recommendations.
complements: each paradigm inherits the biases differently — direct generation worst, input-augmentation least
LLM as recommender has three integration paradigms: embeddings, tokens, or directly as the recommendation system