Can language models discover what users actually want from activity logs?
Users pursue month-long interest journeys that transcend individual item clicks. Can LLMs extract these persistent goals from behavioral patterns, and does this change how we should think about personalization?
Recommender systems predict the next item a user might click on, given their history. But when you ask users what they're actually doing on the platform, they describe something different: persistent, overarching interests — "designing hydroponic systems for small spaces," "learning the ukulele as a beginner," "cooking Italian recipes." These are interest journeys, and they operate at a completely different level of abstraction from next-item prediction.
Survey data shows that 66% of respondents had recently pursued a valued journey on the platform. Of those, 80% consumed relevant content for more than a month, and half said some journeys last more than a year. People typically pursue one to three journeys simultaneously.
The semantic gap is real: collaborative filtering captures correlational patterns between items ("people who watched X also watched Y") but cannot reason about the user's underlying goal, need, or interest. Two users both interested in stand-up comedy may pursue completely different aspects — history documentaries vs. SNL skits. The journey is personalized at a granularity collaborative filtering cannot reach.
LLMs can bridge this gap. Through personalized clustering of user activity logs followed by LLM-powered journey naming, the system produces journey descriptions users identify with. But specificity matters — "greenhouse designs for cold climates" was irrelevant for someone pursuing indoor gardening. The right level of abstraction is what the user would actually say to a friend asking about their interests.
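The two-stage pipeline (cluster the activity log, then have an LLM name each cluster) can be sketched roughly as follows. This is a toy illustration, not the actual system: the hand-built `TOY` vectors stand in for a real text-embedding model, the greedy threshold clustering replaces whatever personalized clustering the production system uses, and `journey_naming_prompt` is a hypothetical prompt builder.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def cluster_activities(titles, embed, threshold=0.8):
    """Greedy one-pass clustering of activity-log titles: each item joins the
    first cluster whose running-mean centroid is within the similarity
    threshold; otherwise it starts a new cluster."""
    clusters = []
    for title in titles:
        v = embed(title)
        for c in clusters:
            if cosine(c["centroid"], v) >= threshold:
                c["members"].append(title)
                n = len(c["members"])  # update centroid as a running mean
                c["centroid"] = [(cc * (n - 1) + vv) / n
                                 for cc, vv in zip(c["centroid"], v)]
                break
        else:
            clusters.append({"centroid": list(v), "members": [title]})
    return clusters

def journey_naming_prompt(members):
    """Hypothetical prompt asking an LLM to name a cluster at the abstraction
    level the user would use with a friend."""
    joined = "\n".join(f"- {m}" for m in members)
    return (
        "These items were consumed by one user as part of a single ongoing interest.\n"
        "Name that interest the way the user would describe it to a friend:\n"
        "specific, but no narrower than the activity supports.\n" + joined
    )

# Toy hand-built embeddings standing in for a real embedding model.
TOY = {
    "DIY hydroponic tower build": [1.0, 0.1],
    "Best nutrients for leafy greens": [0.9, 0.2],
    "Ukulele chords for beginners": [0.1, 1.0],
}
clusters = cluster_activities(TOY, lambda t: TOY[t])
# Two clusters emerge: the hydroponics journey and the ukulele journey.
```

The threshold and the naming prompt are the two knobs that control the abstraction level the note warns about: merge too aggressively and the name drifts toward "gardening"; too little and it narrows to "greenhouse designs for cold climates."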
This connects to "How do personalization granularity levels trade precision against scalability?": interest journeys operate at the user level but require persona-level precision. And as "Does chatbot personalization build trust or expose privacy risks?" explores, journey-aware systems that understand your persistent interests will trigger both the trust and the privacy dimensions of that dual dynamic.
Source: Design Frameworks
Related concepts in this collection
- How do personalization granularity levels trade precision against scalability? LLM personalization operates at user, persona, and global levels, each with different tradeoffs; understanding them helps determine when to invest in individual user data versus broader patterns. Connection: journeys require user-level tracking with persona-level precision.
- Does chatbot personalization build trust or expose privacy risks? Explores whether personalization features that increase user trust and social connection simultaneously heighten privacy concerns and create rising behavioral expectations over time. Connection: journey awareness intensifies the dual dynamic.
- Can conversations themselves personalize without user profiles? Can a conversational AI learn about user traits and adapt in real time by rewarding itself for asking insightful questions, rather than relying on pre-collected profiles or historical data? Connection: a curiosity reward could discover journeys incrementally.
- Does abstract preference knowledge outperform specific interaction recall? Explores whether summarized user preferences are more effective for LLM personalization than retrieving individual past interactions; tests a cognitive dual-memory model against real personalization performance across model scales. Connection: interest journeys are natural semantic-memory content, abstracting activity patterns into durable preference narratives rather than recalling individual interactions, which aligns with PRIME's finding that abstract knowledge outperforms episodic recall.
- Do user outputs outperform inputs for LLM personalization? Does a user's history of outputs (responses, endorsed content) matter more for personalization than their input queries? This explores what actually drives effective personalization in language models. Connection: interest journeys are discoverable from user output patterns (what users consumed, created, and engaged with) rather than from input queries, confirming that the personalization signal lives in outputs.
Original note title: LLMs can discover and describe persistent user interest journeys from activity patterns, but recommender systems predict next items instead