Do recommendation strategies beyond preference questions work better?
What role do sociable conversational moves—opinion sharing, encouragement, credibility signals—play in successful human recommendations, compared to simply asking what someone likes?
The dominant CRS framing treats recommendation as a preference-elicitation problem: ask the user what they like, narrow the candidate set, recommend. The INSPIRED dataset shows this framing is reductive. Across 1,001 human-human movie recommendation dialogues, successful recommendations correlate with sociable strategies — not just preference questions.
The annotation scheme grounds each strategy in social science. Personal opinion expresses a subjective take on the movie. Personal experience shares the recommender's own history with it. Similarity expresses empathy or like-mindedness with the seeker. Encouragement praises the seeker's taste and promotes the candidate. Offering help is transparent about the recommender's intention. Preference confirmation rephrases what the seeker said. Self-modeling has the recommender act first to model the desired behavior. Credibility signals expertise through factual information. Preference-elicitation inquiries (experience inquiry and opinion inquiry) are annotated as a separate category.
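The scheme above is effectively a label taxonomy. A minimal sketch of it as Python enums, assuming the label names from the note (the class names and string values are my own, not the actual INSPIRED schema):

```python
from enum import Enum

class SociableStrategy(Enum):
    """Sociable strategies from the INSPIRED annotation scheme (names per the note)."""
    PERSONAL_OPINION = "personal_opinion"          # subjective take on the movie
    PERSONAL_EXPERIENCE = "personal_experience"    # recommender's own history with it
    SIMILARITY = "similarity"                      # empathy / like-mindedness
    ENCOURAGEMENT = "encouragement"                # praise seeker's taste, promote candidate
    OFFERING_HELP = "offering_help"                # transparency about intention
    PREFERENCE_CONFIRMATION = "preference_confirmation"  # rephrase what the seeker said
    SELF_MODELING = "self_modeling"                # recommender acts first to model behavior
    CREDIBILITY = "credibility"                    # expertise via factual information

class PreferenceElicitation(Enum):
    """Inquiry strategies, annotated as a separate category."""
    EXPERIENCE_INQUIRY = "experience_inquiry"
    OPINION_INQUIRY = "opinion_inquiry"
```

Keeping the sociable strategies and the elicitation inquiries in separate enums mirrors the note's point that they are distinct annotation categories, not points on one scale.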
The empirical pattern: 30% of recommendation sentences are paired with experience inquiries, 27% with encouragement, 14% with personal opinion. Successful recommendations require building rapport, signaling expertise, and showing the recommender as an interlocutor with their own perspective — not just a preference-extraction machine. Trust theory and homophily theory explain why: humans accept recommendations more readily from those they perceive as similar, expert, or transparent.
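The percentages above are co-occurrence statistics: the fraction of recommendation sentences paired with each strategy. A minimal sketch of that computation, assuming a hypothetical annotation structure (a list of dialogues, each a list of `(is_recommendation, strategies)` pairs per sentence) rather than the actual INSPIRED file format:

```python
from collections import Counter

def strategy_cooccurrence(dialogues):
    """Fraction of recommendation sentences paired with each strategy label.

    `dialogues` is a hypothetical structure: each dialogue is a list of
    (is_recommendation, strategy_labels) tuples, one per sentence.
    """
    counts = Counter()
    n_rec = 0
    for dialogue in dialogues:
        for is_rec, strategies in dialogue:
            if is_rec:
                n_rec += 1
                counts.update(strategies)  # a sentence may carry several labels
    return {s: c / n_rec for s, c in counts.items()} if n_rec else {}

# Toy example with made-up annotations (not real INSPIRED data):
toy = [[(True, ["experience_inquiry"]),
        (False, []),
        (True, ["encouragement", "personal_opinion"])]]
```

With the toy input, each label occurs once across two recommendation sentences, so every fraction comes out to 0.5; on the real corpus the same counting would yield the 30% / 27% / 14% figures quoted above.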
The consequence for CRS design is that purely task-oriented architectures (ask preferences, retrieve candidates, present) miss the persuasion mechanics that make humans accept recommendations. Sociable elements — opinion-sharing, credibility appeals, encouragement — are not chitchat to be tolerated but functional mechanisms that improve acceptance rates.
Source: Conversational Recommenders
Related concepts in this collection
- Do simulated training interactions transfer to real conversations?
Most conversational recommender systems train on simulated entity-level exchanges, not natural dialogue. The question is whether models built this way actually work when deployed with real users who speak naturally and deviate from expected patterns.
grounds: INSPIRED is the empirical evidence that real human CRS dialogues use sociable strategies that entity-level simulators cannot generate
- What makes conversational recommenders hard to build well?
Most assume the challenge is language fluency, but what if the real problem is managing mixed-initiative dialogue—where both users and systems take turns driving the conversation?
extends: sociability is a layer beyond preference-elicitation that bounded-task framing must accommodate
- Can review sentiment alignment fix sparse CRS dialogue?
Conversational recommender systems struggle with brief dialogues that lack item-specific detail. Can retrieving reviews that match user sentiment polarity enrich both dialogue context and response generation?
complements: review augmentation supplies the content for sociable strategies — encouragement and similarity-claims need review-derived material
- Do chatbots help people disclose more intimate secrets?
Explores whether the judgment-free nature of chatbot conversations enables deeper self-disclosure than talking to humans, and whether that deeper disclosure produces psychological benefits.
tension with: humans use sociable strategies in CRS; chatbots are sometimes preferred precisely because they don't — CRS must navigate which mode users want
- How do users mentally model dialogue agent partners?
Exploring what dimensions matter when people form impressions of machine dialogue partners—and whether competence, human-likeness, and flexibility all play equal roles in shaping user expectations and behavior.
complements: sociability dimensions correlate with human-likeness and communicative flexibility factors of partner models
Original note title: sociable recommendation strategies outperform pure preference elicitation in human-human dialogues