Recommender Systems

Can language models discover what users actually want from activity logs?

Users pursue month-long interest journeys that transcend individual item clicks. Can LLMs extract these persistent goals from behavioral patterns, and does this change how we should think about personalization?

Note · 2026-02-23 · sourced from Design Frameworks

Recommender systems predict the next item a user might click on, given their history. But when you ask users what they're actually doing on the platform, they describe something different: persistent, overarching interests — "designing hydroponic systems for small spaces," "learning the ukulele as a beginner," "cooking Italian recipes." These are interest journeys, and they operate at a completely different level of abstraction from next-item prediction.

Survey data shows that 66% of respondents had recently pursued a valued journey on the platform. Of those, 80% consumed relevant content for more than a month, and half said some journeys last more than a year. People typically pursue one to three journeys simultaneously.

The semantic gap is real: collaborative filtering captures correlational patterns between items ("people who watched X also watched Y") but cannot reason about the user's underlying goal, need, or interest. Two users both interested in stand-up comedy may pursue completely different aspects — history documentaries vs. SNL skits. The journey is personalized at a granularity collaborative filtering cannot reach.
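To make the "correlational patterns" point concrete, here is a minimal sketch of item-to-item collaborative filtering by raw co-occurrence counting. All item IDs and histories are invented placeholders; a production system would use matrix factorization or learned embeddings rather than raw counts, but the limitation is the same: the scores encode only which items appeared together, with no representation of the user's goal.

```python
from collections import Counter
from itertools import combinations

# Hypothetical watch histories; item IDs are placeholders.
histories = [
    ["standup_history_doc", "snl_skit", "ukulele_intro"],
    ["standup_history_doc", "snl_skit"],
    ["ukulele_intro", "ukulele_chords"],
]

# Count how often each pair of items co-occurs in one user's history.
cooccur = Counter()
for items in histories:
    for a, b in combinations(sorted(set(items)), 2):
        cooccur[(a, b)] += 1

def similar(item, k=2):
    """Rank other items by raw co-occurrence with `item`."""
    scores = Counter()
    for (a, b), n in cooccur.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [i for i, _ in scores.most_common(k)]

print(similar("standup_history_doc"))  # "people who watched X also watched Y"
```

Nothing in `cooccur` distinguishes the history-documentary fan from the SNL-skits fan once their clicks overlap; that is exactly the granularity collaborative filtering cannot reach.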

LLMs can bridge this gap. Through personalized clustering of user activity logs followed by LLM-powered journey naming, the system produces journey descriptions users identify with. But specificity matters — "greenhouse designs for cold climates" was irrelevant for someone pursuing indoor gardening. The right level of abstraction is what the user would actually say to a friend asking about their interests.
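The two-stage pipeline described above (cluster a user's activity, then have an LLM name each cluster) can be sketched as follows. Everything here is a hypothetical illustration: the activity log, the hand-assigned cluster IDs (a real system would embed items and cluster per user), and the prompt wording. The one load-bearing detail, taken from the note itself, is the instruction to name the journey at the abstraction level the user would use with a friend.

```python
from collections import defaultdict

# Hypothetical activity log: (item_title, cluster_id). In a real system
# the cluster IDs would come from embedding items and running a
# per-user clustering step; here they are hand-assigned for illustration.
activity = [
    ("Hydroponic lettuce in a closet", 0),
    ("Small-space nutrient film setups", 0),
    ("Ukulele chords for absolute beginners", 1),
    ("First ukulele strumming patterns", 1),
]

# Stage 1: group the user's items by cluster.
clusters = defaultdict(list)
for title, cid in activity:
    clusters[cid].append(title)

def naming_prompt(titles):
    """Stage 2: build the journey-naming prompt for one cluster.
    The instruction targets the abstraction level the note recommends:
    what the user would say to a friend, not an over-specific label."""
    joined = "\n".join(f"- {t}" for t in titles)
    return (
        "These items were consumed by one user as part of a single "
        "persistent interest. Name the interest the way the user would "
        "describe it to a friend, in one short phrase that is neither "
        f"too broad nor too specific:\n{joined}"
    )

for cid, titles in clusters.items():
    prompt = naming_prompt(titles)  # send to an LLM of your choice
    print(f"cluster {cid}: {len(titles)} items")
```

The prompt constraint is where the specificity problem gets addressed: without it, a model is free to produce labels like "greenhouse designs for cold climates" for someone whose actual journey is indoor gardening.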

This connects to "How do personalization granularity levels trade precision against scalability?": interest journeys operate at the user level but require persona-level precision. And given "Does chatbot personalization build trust or expose privacy risks?", journey-aware systems that understand your persistent interests will trigger both the trust and the privacy dimensions of that dual dynamic.



LLMs can discover and describe persistent user interest journeys from activity patterns but recommender systems predict next items instead