Do different recommender types shape opinion convergence differently?
Explores whether the mechanism by which products are recommended—buying together versus viewing together—creates distinct patterns in how product ratings converge or diverge across a network.
Online stores frequently use multiple recommender algorithms simultaneously. Amazon, for instance, has both "Frequently bought together" and "Customers who viewed this item also viewed" recommendation lists. Each is trained differently and recommends different groups of products. Each creates a different product network — the structure of which products link to which other products via recommendation.
The Maleki Shoja and Tabrizi finding is that the network type matters for opinion convergence. Whether a pair of connected products has converging ratings (similar reviews) or diverging ratings (different reviews) depends on which type of recommender created the link. Frequently-bought-together networks tend to produce one pattern of convergence; co-viewed networks produce another.
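The paper does not specify a metric, but the convergence/divergence of a linked pair can be operationalized in a simple way: compare the rating distributions of the two endpoint products. A minimal sketch, where the gap measure (difference in mean star rating) and all example ratings are illustrative assumptions, not the authors' method:

```python
import statistics

def rating_divergence(ratings_a, ratings_b):
    # Toy convergence measure: absolute gap in mean star rating
    # between two products linked by a recommender edge.
    return abs(statistics.mean(ratings_a) - statistics.mean(ratings_b))

# Hypothetical edge from a "frequently bought together" network:
# both products draw similar reviews, so the gap is small.
print(rating_divergence([5, 4, 5, 5], [5, 5, 4, 4]))  # → 0.25

# Hypothetical edge from a "co-viewed" network:
# the linked products draw very different reviews, so the gap is large.
print(rating_divergence([5, 5, 4, 5], [2, 3, 1, 2]))  # → 2.75
```

A small gap on an edge reads as converging ratings; a large gap as diverging. Any distribution-distance (e.g. Wasserstein) could replace the mean gap without changing the idea.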
This decouples the question of "do recommendations affect ratings" from "which kind of recommendation does what." The mechanism: different recommendation types nudge different population subsets to encounter different items, and those subsets bring different prior expectations. People who buy two items together for a specific use-case develop a different review pattern than people who view both but might buy only one. The recommender shapes both the audience and the comparative frame, which shapes the ratings.
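The audience-and-frame mechanism can be made concrete with a toy simulation. Everything below is assumed for illustration (user counts, expectation values, noise levels, the Gaussian rating model); it is not the paper's model, only a sketch of why the two edge types could yield different convergence:

```python
import random
import statistics

random.seed(0)

def rate(n_users, expectation, noise):
    # Each user's star rating: a shared prior expectation plus
    # individual noise, clipped to the 1-5 scale.
    return [min(5.0, max(1.0, random.gauss(expectation, noise)))
            for _ in range(n_users)]

# Co-purchasers share a use-case: both items are judged against
# the same frame, so their rating distributions sit close together.
bought_a = rate(200, 4.4, 0.5)
bought_b = rate(200, 4.4, 0.5)

# Co-viewers compared alternatives and bought at most one: the
# passed-over item faces a harsher comparative frame.
viewed_a = rate(200, 4.4, 0.5)
viewed_b = rate(200, 2.6, 1.0)

gap_bought = abs(statistics.mean(bought_a) - statistics.mean(bought_b))
gap_viewed = abs(statistics.mean(viewed_a) - statistics.mean(viewed_b))
print(gap_bought, gap_viewed)  # bought-together gap is much smaller
```

Same pair of products, different recommender, different audience prior: the co-purchase edge shows converging ratings while the co-view edge diverges.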
The practical implication for platforms: choosing which recommender to deploy is not just a click-rate decision. It actively shapes the rating ecosystem — what reviews look like, how they correlate, what kind of word-of-mouth propagates. The platform's recommender choice is upstream of the data the platform later analyzes for product insights, a feedback loop the platform may not even realize exists.
Source: Recommenders General
Related concepts in this collection
- How do feed ranking weights shape what content gets produced?
Feed-ranking weights are typically treated as neutral tuning parameters, but do they actually function as political levers that reshape producer behavior and the content supply itself?
extends: weights shape consumer-side opinion convergence in addition to producer-side feed behavior
- Can LLM agents realistically simulate filter bubble effects in recommendations?
Can generative agents with emotion and memory modules faithfully reproduce how recommendation systems create echo chambers and user fatigue? This matters because real-world A/B testing is expensive and slow.
exemplifies in domain: Agent4Rec is the methodological tool for studying exactly the opinion-convergence dynamics this insight names
- Can graph structure patterns outperform direct edge signals in noisy data?
When user-behavior data is messy and unreliable, does looking at structural patterns across multiple edges produce better product recommendations than counting simple co-occurrences? This matters because e-commerce platforms need robust substitute graphs at billion-scale.
complements: the algorithm choice determines what kind of product network gets built — substitute vs complement networks differ in convergence properties
- Do online ratings actually reflect independent customer opinions?
How much do previously-posted ratings shape the ones that come after, and does this social influence distort what ratings supposedly measure? Understanding this matters for anyone relying on review aggregates to judge product quality.
complements: opinion convergence at network level and rating influence at user level are layered population dynamics
Original note title: recommendation systems shape opinion convergence based on the type of product network they create