Why do people bother writing online ratings at all?
People rate products without pay or recognition. Understanding what motivates raters, and how costs shape who rates, reveals why rating distributions may not reflect true customer satisfaction.
Why people write online ratings at all is a foundational question for recommendation: there is no immediate reward, the audience is anonymous, and writing takes time. Lafky's experimental approach isolates rater motivations by manipulating the cost of rating.
The findings overturn several assumptions. First, raters care about both buyers and sellers: they are neither purely altruistic toward fellow shoppers nor purely punitive toward bad merchants. The distribution of ratings reflects mixed motivations rather than a single coherent goal.
Second, when rating is free, people rate broadly across the satisfaction spectrum. When even a small cost is imposed, the distribution of ratings becomes U-shaped: only people with very strong opinions, positive or negative, find rating worth the effort. The middle of the distribution (mildly satisfied or mildly dissatisfied users) drops out. This biases the average rating away from true quality, because the observed average is computed only from the extremes while most customers' actual experience sits in the middle.
Third, small discounts for consumers who rate are a possible remedy: the discount compensates for the cost and recovers participation across the satisfaction range. The general lesson for recommender systems consuming ratings is that the rating distribution is not a sample from satisfaction; it is a sample from satisfaction among people who found it worth rating, and that cost-of-rating filter makes the sample non-representative. Small policy choices about how easy rating is and what compensation it carries dramatically affect which ratings the system sees, and thus which recommendations it produces.
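A minimal simulation sketch of this cost filter, under assumed toy parameters (a triangular satisfaction distribution peaked at mild satisfaction, and a benefit of rating proportional to opinion extremity; these numbers are illustrative, not Lafky's experimental design):

```python
import random
import statistics
from collections import Counter

random.seed(0)
N = 100_000

def simulate(cost, subsidy=0.0):
    """Toy model of the cost-of-rating filter.

    Assumed, illustrative parameters (not Lafky's actual design):
    - satisfaction s ~ triangular on [1, 5], peaked at 3.8
      (most customers end up mildly satisfied)
    - the private benefit of rating grows with extremity |s - 3|
    - a customer rates iff extremity + subsidy >= cost
    """
    ratings = []
    for _ in range(N):
        s = random.triangular(1, 5, 3.8)
        if abs(s - 3) + subsidy >= cost:  # the participation decision
            ratings.append(s)
    return ratings

for cost, subsidy in [(0.0, 0.0), (1.5, 0.0), (1.5, 1.5)]:
    r = simulate(cost, subsidy)
    buckets = Counter(round(s) for s in r)  # coarse 1-5 star histogram
    shape = " ".join(f"{star}*:{buckets[star] / len(r):.0%}" for star in range(1, 6))
    print(f"cost={cost}, subsidy={subsidy}: "
          f"participation={len(r) / N:.0%}, mean={statistics.mean(r):.2f}, {shape}")
```

In this toy setup, zero cost yields full participation and an observed mean near the true mean (about 3.3); a cost of 1.5 collapses participation to the extreme tails, the middle star buckets empty out into a U-shape, and the observed mean shifts noticeably away from the true mean; a subsidy matching the cost restores full participation and the unbiased mean.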
Source: Recommenders General
Related concepts in this collection
- Do online reviews actually measure product quality or just buyer preferences?
  Online reviews come only from customers who already expected to like a product. This self-selection might hide the true quality signal beneath layers of preference bias and writing motivation. What can aggregated ratings actually tell us?
  extends: the U-shaped distribution and selection bias compound; strong-opinion raters self-select into the rating action after self-selecting into purchase
- Why do online reviewers publish negative ratings despite positive experiences?
  When people post reviews publicly, do they adjust their honest opinions to seem more discerning? Schlosser's experiments test whether audience awareness shifts how people rate products compared to private ratings.
  complements: who chooses to rate amplifies audience-driven negativity; strong-opinion raters drive the public skew
- Do online ratings actually reflect independent customer opinions?
  How much do previously posted ratings shape the ones that come after, and does this social influence distort what ratings supposedly measure? Understanding this matters for anyone relying on review aggregates to judge product quality.
  complements: the U-shaped distribution is the initial state; social dynamics shape the trajectory
- Why do the same users rate items differently each time?
  User ratings are assumed to be clean preference signals, but do they actually fluctuate unpredictably? This matters because recommender systems rely on ratings as ground truth, yet temporal inconsistency and individual rating styles may contaminate that signal.
  complements: rater idiosyncrasy and the self-selection of strong-opinion raters together describe the multiple noise sources in observed ratings
Original note title
why-people-rate motivations include both buyer concern and seller anger — small participation costs produce U-shaped distributions