Do online ratings actually reflect independent customer opinions?
How much do previously-posted ratings shape the ones that come after, and does this social influence distort what ratings supposedly measure? Understanding this matters for anyone relying on review aggregates to judge product quality.
The standard implicit assumption when reading online ratings is that each rating is an independent observation of customer experience: average them and you have an estimate of product quality. Moe and Trusov's analysis decomposes observed ratings into a baseline ratings component (the consumer's "socially unbiased" evaluation), a social-dynamics component (the influence of previously-posted ratings), and an idiosyncratic error component, then models product sales as a function of these components.
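The decomposition's core claim can be illustrated with a toy simulation (a minimal sketch of my own, not the paper's actual model): each observed rating is a baseline evaluation plus a pull of strength beta toward the mean of previously-posted ratings, plus idiosyncratic noise. Two products with identical baselines then drift apart depending only on their first posted rating.

```python
import random

random.seed(0)

def simulate_ratings(n, baseline, beta, noise_sd, first_rating):
    """Toy model (illustrative, not the paper's specification):
    observed rating = baseline evaluation
                      + beta * (mean of prior ratings - baseline)
                      + noise.
    beta is the social-dynamics pull toward previously-posted ratings."""
    ratings = [first_rating]
    for _ in range(n - 1):
        prior_mean = sum(ratings) / len(ratings)
        r = baseline + beta * (prior_mean - baseline) + random.gauss(0, noise_sd)
        ratings.append(min(5.0, max(1.0, r)))  # clamp to a 1-5 star scale
    return ratings

# Two products with the same true quality (3.5) but different first ratings:
high_start = simulate_ratings(500, 3.5, 0.8, 0.2, 5.0)
low_start = simulate_ratings(500, 3.5, 0.8, 0.2, 1.0)
print(sum(high_start) / 500)  # average anchored above the shared baseline
print(sum(low_start) / 500)   # average anchored below it
```

With the pull removed (beta = 0) both averages converge to the shared baseline of 3.5; the divergence comes entirely from the social-dynamics component.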
The findings are nuanced. Substantial social dynamics exist in the ratings environment — previously-posted ratings influence subsequent ones. These dynamics have both direct effects on sales (changes in average rating drive immediate purchases) and indirect effects (today's ratings influence tomorrow's ratings, which affect future sales). Some of the indirect effects mitigate long-term impact: when opinion variance is high, the social-dynamics-induced shifts get averaged out over time.
But the headline conclusion is that observed ratings do not always accurately reflect product performance. Even setting aside Schlosser's audience-effect finding and Hu et al.'s self-selection result, this paper documents that ratings are shaped by the ratings posted before them. Marketers, recognizing this, invest in creating favorable ratings environments, not because they expect to fool customers but because favorable early ratings genuinely propagate: the system actually works that way.
For recommender systems consuming ratings as input, the implication is that the data is socially conditioned, not just preference-conditioned. Treating ratings as independent observations yields biased estimates of product quality, and therefore recommendations biased toward whatever products the early rating dynamics happened to favor. The fix is structural, not statistical: model the social conditioning explicitly.
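What modeling the social conditioning explicitly might look like in miniature, under the same kind of toy influence model (an illustrative sketch, not the paper's actual estimator): if r_t = b + beta * (m_t - b) + noise, where m_t is the running mean of prior ratings, rearranging gives the regression r_t = (1 - beta) * b + beta * m_t + noise, so ordinary least squares on the prior mean recovers both the influence weight beta and the socially unbiased baseline b that the naive average misses.

```python
import random

random.seed(1)

def simulate(n, b, beta, sd, first):
    """Generate ratings under the assumed toy influence model
    r_t = b + beta * (prior_mean - b) + noise."""
    r = [first]
    for _ in range(n - 1):
        m = sum(r) / len(r)
        r.append(b + beta * (m - b) + random.gauss(0, sd))
    return r

def debias(ratings):
    """Estimate beta and the socially unbiased baseline b by OLS on
    r_t = (1 - beta) * b + beta * prior_mean_t + noise."""
    xs = [sum(ratings[:t]) / t for t in range(1, len(ratings))]
    ys = ratings[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum(x * y for x, y in zip(xs, ys)) - n * mx * my) / \
           (sum(x * x for x in xs) - n * mx * mx)
    b = (my - beta * mx) / (1.0 - beta)
    return beta, b

ratings = simulate(2000, 3.5, 0.8, 0.2, 5.0)  # true quality 3.5, hot start
naive = sum(ratings) / len(ratings)
beta_hat, b_hat = debias(ratings)
print(naive)         # inflated by the favorable early ratings
print(beta_hat, b_hat)
```

The naive average stays anchored above the true baseline because it includes the socially-conditioned component; the structural estimate strips that component out by modeling it. Estimates are noisy on any single rating sequence, which is the point: the correction comes from the model, not from collecting more of the same conditioned data.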
Source: Recommenders General
Related concepts in this collection
- Why do online reviewers publish negative ratings despite positive experiences?
  When people post reviews publicly, do they adjust their honest opinions to seem more discerning? Schlosser's experiments test whether audience awareness shifts how people rate products compared to private ratings.
  extends: the social-dynamics-shaping-future-ratings finding adds the temporal compounding to Schlosser's audience-effect finding
- Do online reviews actually measure product quality or just buyer preferences?
  Online reviews come only from customers who already expected to like a product. This self-selection might hide the true quality signal beneath layers of preference bias and writing motivation. What can aggregated ratings actually tell us?
  complements: self-selection and social-dynamics together describe the multi-layered non-independence of public ratings
- Why do the same users rate items differently each time?
  User ratings are assumed to be clean preference signals, but do they actually fluctuate unpredictably? This matters because recommender systems rely on ratings as ground truth, yet temporal inconsistency and individual rating styles may contaminate that signal.
  complements: this adds a between-user noise dimension to the within-user noise Amatriain documents
- Do different recommender types shape opinion convergence differently?
  Explores whether the mechanism by which products are recommended—buying together versus viewing together—creates distinct patterns in how product ratings converge or diverge across a network.
  complements: same opinion-shaping mechanism at network level — recommender networks shape product reputation as social dynamics shape rating reputation
- Why do people bother writing online ratings at all?
  People get no pay or recognition for rating products, yet do it anyway. Understanding what motivates raters, and how costs affect who rates, reveals why rating distributions may not reflect true customer satisfaction.
  complements: who chooses to rate amplifies social-dynamics effects — strong-opinion raters drive the future-rating influence
Original note title: online ratings have small social-dynamics effects that compound through future-rating influence — ratings forums are not independent observations