Recommender Systems

How should language models integrate into recommender systems?

When building recommendation systems with LLMs, should you use them as feature encoders, token generators, or direct recommenders? The choice affects efficiency, bias, and compatibility with existing pipelines.

Note · 2026-05-03 · sourced from Recommenders General

The Wu et al. survey of LLM-based recommendation organizes the field into three paradigms with distinct architectures and trade-offs.

LLM Embeddings + RS treats the language model as a feature extractor. Item and user features feed into the LLM, which outputs corresponding embeddings. A traditional recommender model consumes these knowledge-aware embeddings for recommendation tasks. The LLM doesn't make recommendations; it enriches representations.
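A minimal sketch of this paradigm, under stated assumptions: `llm_encode` is a hypothetical stand-in for a real LLM embedding call (it produces deterministic pseudo-embeddings from a text hash), and the downstream retrieval is a simple dot-product ranker playing the role of the traditional recommender.

```python
import hashlib
import numpy as np

def llm_encode(text: str, dim: int = 8) -> np.ndarray:
    # Stand-in for the LLM feature extractor. A real system would call an
    # embedding model here; this stub derives a deterministic pseudo-embedding
    # from a hash of the text so the example is self-contained.
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def recommend(user_profile: str, item_texts: list[str], k: int = 2) -> list[str]:
    # The traditional recommender consumes the knowledge-aware embeddings:
    # here, a plain dot-product similarity ranking.
    u = llm_encode(user_profile)
    scored = sorted(item_texts, key=lambda t: float(u @ llm_encode(t)), reverse=True)
    return scored[:k]

items = ["sci-fi novel", "cookbook", "space documentary"]
top = recommend("enjoys astronomy and science fiction", items)
```

The key property of the paradigm is visible in the structure: the LLM appears only inside `llm_encode`, and everything downstream is conventional retrieval machinery.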

LLM Tokens + RS goes a step further. The LLM generates semantic tokens from item and user features, distilling preferences through semantic mining; these tokens then feed into the decision-making of a recommendation system. Discrete tokens are more compact than full embeddings and easier to integrate into existing pipelines.
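A toy sketch of the token paradigm, with assumptions labeled: `llm_semantic_tokens` is a hypothetical stand-in for an LLM prompted to summarize behavior into compact preference tokens (here faked with a keyword lookup), and the downstream scorer treats those tokens as sparse categorical features.

```python
def llm_semantic_tokens(history: list[str]) -> list[str]:
    # Stand-in for an LLM doing semantic mining over user behavior.
    # A real system would prompt the model; this stub maps keywords to tokens.
    vocab = {"telescope": "astronomy", "rocket": "space", "pasta": "cooking"}
    tokens = []
    for event in history:
        for kw, tok in vocab.items():
            if kw in event and tok not in tokens:
                tokens.append(tok)
    return tokens

def score_item(tokens: list[str], item_tags: set[str]) -> int:
    # Downstream recommender consumes the tokens as sparse features;
    # here the score is just the tag overlap.
    return len(set(tokens) & item_tags)

tokens = llm_semantic_tokens(["bought telescope", "viewed rocket launch"])
# tokens -> ["astronomy", "space"]
```

Because the tokens are discrete, they slot into existing ID- or tag-based pipelines the way any categorical feature would, which is the integration advantage the paradigm claims.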

LLM as RS is the direct paradigm. The pre-trained LLM is transferred into a recommendation system, with input sequences containing profile descriptions, behavior prompts, and task instructions. The LLM directly outputs recommendations. This is the most ambitious paradigm and faces the position bias, popularity bias, and fairness issues inherent to language models.
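What "input sequences containing profile descriptions, behavior prompts, and task instructions" means in practice can be sketched as prompt assembly. This is an illustrative template only; the field names and layout are assumptions, not a format prescribed by the survey.

```python
def build_prompt(profile: str, history: list[str], candidates: list[str]) -> str:
    # Assemble the three components of the input sequence:
    # a profile description, a behavior prompt, and a task instruction.
    return (
        f"User profile: {profile}\n"
        f"Recent behavior: {'; '.join(history)}\n"
        f"Candidates: {', '.join(candidates)}\n"
        "Task: rank the candidates by predicted preference."
    )

prompt = build_prompt(
    "enjoys hard sci-fi",
    ["read Dune", "rated The Martian 5 stars"],
    ["Project Hail Mary", "Cookbook Basics"],
)
# The assembled prompt would then be sent to the pre-trained LLM,
# whose text output is parsed as the ranked recommendation list.
```

The biases mentioned above enter at exactly these seams: the order of `candidates` in the prompt drives position bias, and the model's pre-training priors drive popularity bias, independent of the user's actual history.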

The three paradigms differ in efficiency, latency, and how much they leverage existing recommendation infrastructure. Embeddings are most compatible with existing pipelines but underuse LLM capability. Direct LLM-as-RS maximizes LLM use but introduces LLM-specific biases and inference latency. Tokens sit at an intermediate point. The choice depends on what the deployment can tolerate: production latency budgets, investment in existing pipelines, and tolerance for LLM-specific biases all factor in.

The survey's framing is methodologically useful: rather than treating "LLM-based recommendation" as one thing, naming the three paradigms clarifies which problems different research efforts are actually solving.


