Why do online reviewers publish negative ratings despite positive experiences?
When people post reviews publicly, do they adjust their honest opinions to seem more discerning? Schlosser's experiments test whether awareness of an audience shifts how people rate products publicly compared with how they rate in private.
Reviewing online is communication to a multiple audience: people who liked the product and people who didn't, simultaneously. Schlosser's experiments isolate a self-presentational mechanism that distinguishes posting from private rating. After reading a negative review, posters lower their public rating relative to the no-review and positive-review conditions, even when their personal experience with the product was favorable. Lurkers, who rate privately, show no such effect.
The mechanism: negative evaluators are perceived as more intelligent, competent, and expert than positive evaluators (Amabile 1983). Reading a negative review primes posters to worry that their own positive opinion will make them seem undiscriminating or lacking in standards, so they hedge downward to appear more discerning. The effect is asymmetric: positive reviews don't trigger the equivalent worry, because positive evaluations don't carry the same intelligence signal.
Posters also acknowledge multiple sides of the issue more than lurkers do, but they do not integrate those sides: they hold them as parallel claims rather than synthesizing them. Lurkers, freer of social pressure, are more likely to integrate multiple viewpoints into a single coherent judgment.
This contradicts cognitive-tuning research (which predicts that communicators shift toward their audience's attitude), Grice's cooperative-principle maxims (which posters violate by suppressing their genuine positive experience), and the assumption that anticipated social interaction prevents negativity bias. The findings are specific to multiple-audience public communication. The implication for recommender systems: aggregated ratings on multi-audience platforms systematically understate true average satisfaction for products that received any negative review, because each subsequent reviewer with a positive experience adjusts their public rating downward.
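To make that aggregation effect concrete, here is a minimal simulation sketch. It is not from Schlosser's experiments: the satisfaction distribution, the threshold for what counts as a negative review, and the size of the downward hedge (AUDIENCE_PENALTY) are all illustrative assumptions. It shows that once a single negative review appears, the public mean drifts below the private mean even though private experiences never change.

```python
import random

# Illustrative assumptions (not from Schlosser's experiments):
# private satisfaction on a 1-5 star scale, mostly positive.
TRUE_MEAN, NOISE = 4.2, 0.6
AUDIENCE_PENALTY = 0.8    # hypothetical downward hedge after seeing a negative review
NEGATIVE_THRESHOLD = 3.0  # ratings below this count as "negative"
N_RATERS = 10_000

random.seed(0)

def clamp(x, lo=1.0, hi=5.0):
    """Keep a rating on the 1-5 scale."""
    return max(lo, min(hi, x))

private, public = [], []
negative_review_posted = False
for _ in range(N_RATERS):
    experience = clamp(random.gauss(TRUE_MEAN, NOISE))  # honest private opinion
    private.append(experience)

    rating = experience
    # Audience effect: once any negative review is visible, posters with a
    # positive experience hedge their public rating downward to seem discerning.
    if negative_review_posted and experience >= NEGATIVE_THRESHOLD:
        rating = clamp(experience - AUDIENCE_PENALTY)
    public.append(rating)

    if rating < NEGATIVE_THRESHOLD:
        negative_review_posted = True

print(f"private mean: {sum(private) / len(private):.2f}")
print(f"public mean:  {sum(public) / len(public):.2f}")
```

Under these assumptions the public mean lands well below the private mean. The size of the gap is an artifact of the chosen penalty, but its direction follows directly from the asymmetric hedging described above.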
Source: Recommenders General
Related concepts in this collection
- Do online reviews actually measure product quality or just buyer preferences?
  Online reviews come only from customers who already expected to like a product. This self-selection might hide the true quality signal beneath layers of preference bias and writing motivation. What can aggregated ratings actually tell us?
  complements: who-rates and how-they-rate are both biased; selection drives the initial population, audience effects drive the public-private gap
- Do online ratings actually reflect independent customer opinions?
  How much do previously posted ratings shape the ones that come after, and does this social influence distort what ratings supposedly measure? Understanding this matters for anyone relying on review aggregates to judge product quality.
  extends: the audience effect at the individual level compounds through Moe-Trusov's social-dynamics-into-future-ratings channel
- Why do the same users rate items differently each time?
  User ratings are assumed to be clean preference signals, but do they actually fluctuate unpredictably? This matters because recommender systems rely on ratings as ground truth, yet temporal inconsistency and individual rating styles may contaminate that signal.
  complements: rater idiosyncrasy plus audience-shaped negativity together describe public-rating contamination
- Why do LLMs generate polite reviews even when users hated products?
  Large language models trained with RLHF develop a politeness bias that overrides negative sentiment in review generation. Understanding this bias and how to counteract it is crucial for creating accurate, user-aligned review systems.
  tension with: humans default to negative-bias in public review contexts; LLMs default to positive-bias: opposite output skews from different mechanisms
- Why do people bother writing online ratings at all?
  People receive no pay or recognition for rating products, yet they rate anyway. Understanding what motivates raters (and how costs affect who rates) reveals why rating distributions may not reflect true customer satisfaction.
  complements: U-shape distribution and audience-driven negativity work together; strong-opinion raters self-select and skew public ratings
Original note title: posters publish negativity-biased reviews in multiple-audience contexts even when private experience was positive