
Do user outputs outperform inputs for LLM personalization?

Does a user's history of outputs (responses, endorsed content) matter more for personalization than their input queries? This note explores what actually drives effective personalization in language models.

Note · 2026-02-23 · sourced from Personalization

A study on the role of user profiles in LLM personalization surfaces a counterintuitive finding: the outputs users have produced or endorsed matter far more than the inputs they submitted. Using only the output portion of user profiles achieves comparable or even superior performance to complete profiles across multiple LaMP tasks; using only the input portion leads to noticeable degradation.

This finding separates personalization from two adjacent paradigms:

Personalization ≠ RAG. Retrieval-augmented generation relies on semantic similarity between the input query and retrieved documents. Personalization works through a different mechanism — it is the style, preferences, and judgments expressed in historical responses that calibrate the model, not the semantic content of past queries.

Personalization ≠ ICL. In-context learning uses complete input-output pairs as demonstrations. Personalization requires only the output side — the response patterns that reveal who the user is and what they value.
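The contrast between the three paradigms can be expressed as prompt-construction strategies. A minimal sketch, assuming profile entries are `(query, response)` dicts and using a toy word-overlap similarity in place of real embeddings (all function names here are illustrative, not from the study):

```python
# Toy word-overlap similarity; a real RAG system would use embeddings.
def similarity(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

def rag_style(query, history, top_k=3):
    """RAG: retrieve past entries by semantic similarity to the current query."""
    ranked = sorted(history, key=lambda h: similarity(query, h["query"]), reverse=True)
    context = "\n".join(f"{h['query']} -> {h['response']}" for h in ranked[:top_k])
    return f"{context}\n\nQuery: {query}"

def icl_style(query, history):
    """ICL: complete input-output pairs as demonstrations."""
    demos = "\n".join(f"Input: {h['query']}\nOutput: {h['response']}" for h in history)
    return f"{demos}\n\nInput: {query}\nOutput:"

def output_only_style(query, history):
    """Personalization per the finding: only the output side of the profile."""
    outputs = "\n".join(f"- {h['response']}" for h in history)
    return f"Past responses by this user:\n{outputs}\n\nQuery: {query}"
```

The output-only variant drops past queries entirely; per the study, this matches or beats the complete profile on LaMP tasks.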

The practical implication: when designing personalization systems under input length constraints, prioritize user-generated or user-approved responses over query histories. Because output-only profiles are both more effective and more compact than complete interaction histories, many more profile entries fit within a limited context window.
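Under a fixed token budget, the compactness claim can be sketched as a greedy packer that keeps the most recent responses. A hypothetical helper, with whitespace word count standing in for a real tokenizer:

```python
def pack_output_profile(responses, budget, count_tokens=lambda s: len(s.split())):
    """Greedily keep the newest responses that fit within `budget` tokens,
    then return them in chronological order."""
    packed, used = [], 0
    for resp in reversed(responses):   # newest first
        cost = count_tokens(resp)
        if used + cost > budget:
            break
        packed.append(resp)
        used += cost
    return packed[::-1]                # restore chronological order
```

An output-only entry costs only the response's tokens, where a complete interaction entry would also pay for the query, so the same budget holds more entries.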

A secondary finding adds a structural dimension: user profiles integrated closer to the beginning of the input context have more influence on personalization than those placed later. This parallels the positional bias documented for in-context learning (see "How much does demo position alone affect in-context learning accuracy?"): the attention pattern favoring early context appears to be domain-general, affecting profile placement decisions as well as few-shot demonstration ordering.
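The placement decision itself reduces to where the profile is concatenated. A minimal sketch (assumed structure; the study only reports that early placement carries more influence):

```python
def assemble_prompt(profile: str, task_input: str, profile_first: bool = True) -> str:
    """Join a user profile and the task input. Placing the profile at the
    beginning of the context gives it more influence, per the positional finding."""
    parts = [profile, task_input] if profile_first else [task_input, profile]
    return "\n\n".join(parts)
```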

The output-over-input finding connects to the broader question of what personalization is (see "Can text summaries condition reward models better than embeddings?"): it vindicates the PLUS approach of training a summarizer to extract preference dimensions, rather than topic summaries, from user history. Preference dimensions are properties of outputs, not inputs.




Historical user outputs drive personalization more effectively than input queries; personalization information, not semantic information, is the active ingredient.