Do user outputs outperform inputs for LLM personalization?
Does a user's history of outputs (responses, endorsed content) matter more for personalization than their input queries? This note explores what actually drives effective personalization in language models.
A study on user profile roles in LLM personalization surfaces a counterintuitive finding: the outputs users have produced or endorsed matter far more than the inputs they submitted. Using only the output part of user profiles achieves comparable or even superior performance to complete profiles across multiple LaMP tasks. Using only the input part leads to noticeable degradation.
This finding separates personalization from two adjacent paradigms:
Personalization ≠ RAG. Retrieval-augmented generation relies on semantic similarity between the input query and retrieved documents. Personalization works through a different mechanism — it is the style, preferences, and judgments expressed in historical responses that calibrate the model, not the semantic content of past queries.
Personalization ≠ ICL. In-context learning uses complete input-output pairs as demonstrations. Personalization requires only the output side — the response patterns that reveal who the user is and what they value.
The practical implication: when designing personalization systems under input-length constraints, prioritize user-generated or user-approved responses over query histories. Because output-only profiles are both more effective and more compact than complete interaction histories, this frees room to fit far more profile entries within a limited context window.
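The packing strategy above can be sketched minimally. This is a hypothetical illustration, not code from the study: `build_output_only_profile` and its whitespace-based token estimate are assumptions; a real system would use the model's tokenizer.

```python
# Hypothetical sketch: pack an output-only user profile into a fixed
# context budget, dropping input queries entirely per the finding
# that outputs carry the personalization signal.

def build_output_only_profile(history, budget_tokens=256):
    """history: list of (input_query, user_output) pairs, newest first.

    Returns profile text built from outputs only, truncated to fit
    an approximate token budget (whitespace tokens as a stand-in
    for a real tokenizer).
    """
    lines, used = [], 0
    for _query, output in history:  # the query is deliberately discarded
        cost = len(output.split())
        if used + cost > budget_tokens:
            break  # keep the most recent outputs that fit
        lines.append(output)
        used += cost
    return "\n".join(lines)

history = [
    ("Summarize this paper", "Concise, bullet-style summary with citations."),
    ("Rate this headline", "Too sensational; I prefer neutral phrasing."),
]
profile = build_output_only_profile(history, budget_tokens=50)
```

Because only the output side is kept, roughly twice as many historical interactions fit in the same budget compared with storing full input-output pairs.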
A secondary finding adds a structural dimension: user profiles integrated closer to the beginning of the input context exert more influence on personalization than those placed later. This parallels the positional bias documented for in-context learning (see "How much does demo position alone affect in-context learning accuracy?"): the spatial attention pattern appears to be domain-general, shaping profile-placement decisions in personalization just as it shapes demonstration placement in few-shot learning.
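The placement finding suggests prepending, rather than appending, the profile when assembling the prompt. A minimal sketch, assuming a hypothetical `assemble_prompt` helper (the function and segment names are illustrative, not from the study):

```python
# Hypothetical sketch: place the user profile at the start of the
# context, following the finding that early-context profiles carry
# more weight than late-context ones.

def assemble_prompt(profile, task_instruction, query, profile_first=True):
    segments = [task_instruction, query]
    if profile_first:
        segments.insert(0, profile)  # default: profile leads the context
    else:
        segments.append(profile)     # baseline for comparison
    return "\n\n".join(segments)

prompt = assemble_prompt(
    profile="User prefers terse, neutral summaries.",
    task_instruction="Write a headline for the article below.",
    query="Article: ...",
)
```

Flipping `profile_first` gives a cheap A/B lever for measuring the positional effect on a given task.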
The output-over-input finding connects to the broader question of what personalization is. As argued in "Can text summaries condition reward models better than embeddings?", the PLUS approach of training a summarizer to extract preference dimensions, rather than topic summaries, from user history is vindicated here: preference dimensions are properties of outputs, not inputs.
Source: Personalization
Related concepts in this collection
- How much does demo position alone affect in-context learning accuracy?
  Moving demonstrations from prompt start to end without changing their content produces surprisingly large accuracy swings. Does spatial position in the prompt matter more than what demonstrations actually contain?
  Connection: positional bias extends to user profile placement.
- Can text summaries condition reward models better than embeddings?
  Exploring whether learning interpretable text-based summaries of user preferences outperforms embedding vectors for training personalized reward models in language model alignment.
  Connection: PLUS focuses on preference dimensions (output properties), not topics (input properties).
- How do personalization granularity levels trade precision against scalability?
  LLM personalization operates at user, persona, and global levels, each with different tradeoffs. Understanding these tradeoffs helps determine when to invest in individual user data versus broader patterns.
  Connection: output-only profiles fit more data within length constraints at all granularity levels.
Original note title: historical user outputs drive personalization more effectively than input queries; personalization information, not semantic information, is the active ingredient.