Do online reviews actually measure product quality or just buyer preferences?
Online reviews come only from customers who already expected to like a product. This self-selection might hide the true quality signal beneath layers of preference bias and writing motivation. What can aggregated ratings actually tell us?
If consumers were homogeneous in preferences, their reviews would directly reveal product quality — average a few of them and you have an unbiased estimate. Heterogeneity breaks this both ex ante and ex post. Ex ante, only consumers who expected to be satisfied chose to purchase; consumers who would have hated the product never bought it and so never wrote anything. Ex post, among those who did purchase, only the subset motivated enough to write a review is observable, and that motivation itself correlates with how extreme the experience was. Both filters select on idiosyncratic preferences correlated with satisfaction.
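The purchase filter alone is enough to pull the observed average away from true quality. A minimal simulation sketch — every number and distribution here is an illustrative assumption, not taken from the cited papers: satisfaction is a common quality term plus idiosyncratic fit, and only consumers whose noisy pre-purchase signal of fit is positive buy.

```python
import random

random.seed(0)
TRUE_QUALITY = 3.0   # common quality term on a rough 1-5 scale (assumed)
N = 100_000

everyone, buyers = [], []
for _ in range(N):
    taste = random.gauss(0, 1)             # idiosyncratic fit with the product
    signal = taste + random.gauss(0, 0.5)  # noisy pre-purchase estimate of fit
    satisfaction = TRUE_QUALITY + taste    # realized satisfaction if purchased
    everyone.append(satisfaction)
    if signal > 0:                         # ex ante filter: expects a good fit
        buyers.append(satisfaction)

mean = lambda xs: sum(xs) / len(xs)
print(f"population mean: {mean(everyone):.2f}")  # tracks true quality
print(f"buyer mean:      {mean(buyers):.2f}")    # inflated by self-selection
```

The gap between the two means is the average idiosyncratic fit among self-selected purchasers; collecting more reviews from the same pool shrinks sampling noise but never this gap.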
This produces several non-obvious effects studied across the review-aggregation literature. Hu, Zhang, and Pavlou's self-selection paper shows that early buyers' idiosyncratic preferences propagate into long-term purchase behavior — early reviewers shape what later buyers think the product is. Besbes and Scarsini formalize the question of whether consumers can learn product quality from reviews despite the bias, finding that altruistic reviewers (those writing about intrinsic product quality rather than personal experience) enable social learning while subjective reviewers do not. The combination of these two findings is uncomfortable: self-selection ensures the reviewer pool is biased, and many reviewers write about themselves rather than the product, which means social learning from ratings is conditional on reviewer motivation.
Acemoglu et al.'s "Fast and Slow Learning From Reviews" shows that more information does not always lead to faster learning — strictly finer rating systems do, but adding summary statistics can slow learning by amplifying selection effects. The general point: the rating distribution you see is not the satisfaction distribution among all potential customers. It is the satisfaction distribution among self-selected purchasers who chose to write, with the writing motivation itself a confound. Treating average rating as a quality estimate ignores both filters.
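Stacking the writing filter on top of the purchase filter changes the shape of the observed distribution, not just its level. A sketch under illustrative assumptions (none of the numbers come from the cited papers): satisfaction is a common quality term plus idiosyncratic fit, a noisy fit signal gates purchase, and the probability of writing a review rises with how extreme the buyer's opinion is.

```python
import random

random.seed(1)
TRUE_QUALITY = 3.0
buyers = []
for _ in range(100_000):
    taste = random.gauss(0, 1)                # idiosyncratic fit
    if taste + random.gauss(0, 0.5) > 0:      # filter 1: expected a good fit
        buyers.append(TRUE_QUALITY + taste)   # realized satisfaction

# Filter 2: extreme opinions are likelier to become written reviews (assumed form).
write_prob = lambda s: min(1.0, 0.1 + 0.3 * abs(s - TRUE_QUALITY))
reviews = [s for s in buyers if random.random() < write_prob(s)]

# Share of strongly positive/negative experiences in each population.
extreme = lambda xs: sum(abs(s - TRUE_QUALITY) > 1.5 for s in xs) / len(xs)
print(f"extreme share among buyers:  {extreme(buyers):.2f}")
print(f"extreme share among reviews: {extreme(reviews):.2f}")  # tails inflated
```

Even holding the buyer pool fixed, the reviews over-represent the tails of the satisfaction distribution — the second filter distorts the histogram a rating summary is computed from.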
Source: Recommenders General
Related concepts in this collection

- Why do the same users rate items differently each time?
  User ratings are assumed to be clean preference signals, but do they actually fluctuate unpredictably? This matters because recommender systems rely on ratings as ground truth, yet temporal inconsistency and individual rating styles may contaminate that signal.
  complements: rating noise and selection bias compound — observed ratings have multiple layers of contamination
- Why do online reviewers publish negative ratings despite positive experiences?
  When people post reviews publicly, do they adjust their honest opinions to seem more discerning? Schlosser's experiments test whether audience awareness shifts how people rate products compared to private ratings.
  complements: who reviews and how they review are both biased — the survivorship problem (selection) and the audience problem (negativity)
- Do online ratings actually reflect independent customer opinions?
  How much do previously posted ratings shape the ones that come after, and does this social influence distort what ratings supposedly measure? Understanding this matters for anyone relying on review aggregates to judge product quality.
  complements: selection bias sets the initial distribution; social dynamics shape the trajectory
- Why do people bother writing online ratings at all?
  Rating products earns no pay or recognition, yet people do it anyway. Understanding what motivates raters—and how costs affect who rates—reveals why rating distributions may not reflect true customer satisfaction.
  extends: self-selection at the rating-decision step compounds with self-selection at the purchase step — the U-shaped rating distribution comes from extreme-opinion raters self-selecting
Original note title: online review aggregation has structural self-selection biases — only customers who expected satisfaction purchase and review