Does embedding dimensionality secretly drive popularity bias in recommenders?
Conventional wisdom treats low-dimensional models as overfitting protection. But does this practice inadvertently cause recommenders to systematically favor popular items, reducing diversity and fairness regardless of the optimization metric used?
Standard ML practice treats low-dimensional models as a hedge against overfitting. Smaller hidden layers, smaller embedding sizes, fewer parameters — all traditional ways to fight memorization of training noise. Naoto Ohsaka and Riku Togashi's argument is that in recommender systems this prescription has a long-term side effect that conventional model selection misses: low-dimensional dot-product models systematically overfit toward popularity bias.
When the user/item embedding dimension is too small to delineate individual tastes, the model's best response under ranking-quality optimization is to push everyone toward popular items. Popular items get recommended to more people than their preferences justify. This produces nondiverse and unfair recommendations regardless of the optimization metric. Worse, it creates insufficient exposure data for less popular items, so the next training round has even thinner signal on niche taste, compounding the bias.
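The collapse described above can be sketched with a toy rank-constrained dot-product model. The setup below is purely illustrative (a hypothetical synthetic preference matrix, not data or code from the paper): every user moderately likes one universally appealing item, but each user's true favourite is a niche item shared only with their taste group. Truncated SVD gives the best low-rank dot-product fit, so it stands in for a capacity-limited recommender.

```python
import numpy as np

# Hypothetical synthetic setup: 12 taste groups of 5 users each.
# Item 0 is moderately liked by everyone; each user's true favourite
# is their group's niche item.
n_groups = 12
n_users = 5 * n_groups
n_items = 1 + n_groups

R = np.zeros((n_users, n_items))
R[:, 0] = 1.5                            # popular item: broad, moderate appeal
group = np.arange(n_users) % n_groups
R[np.arange(n_users), 1 + group] = 2.0   # niche item: each user's true favourite

def top1_items(rank):
    """Best rank-`rank` dot-product model of R (truncated SVD);
    return each user's top-ranked item under that model."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    scores = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return scores.argmax(axis=1)

print(len(set(top1_items(1))))        # 1  -> every user is pushed to the popular item
print(len(set(top1_items(n_items))))  # 12 -> each group's niche favourite reappears
```

At rank 1 the model's best response is the shared popularity direction, so every user's top slot goes to item 0 even though no user prefers it most; once the rank is large enough to cover the group structure, the niche favourites are recovered. In a feedback loop, those rank-1 recommendations would also be the only exposure data collected for the next round.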
The dimensionality of user/item embeddings receives relatively little attention compared to learning rates or regularization. Developers select models on ranking quality alone, choose low-dimensional models to cut memory and serving costs, and discover the diversity collapse only after deployment. Even when developers select on both ranking quality and diversity, the achievable trade-offs are severely limited if the dimensionality is tuned over only a narrow range.
The actionable point: embedding dimension is a fairness/diversity hyperparameter, not just a memory/capacity hyperparameter. Setting it low to save space is implicitly choosing popularity bias. The trade-off needs to be made explicit during model design, not patched post hoc with diversity re-rankers.
Source: Recommenders General
Related concepts in this collection
- Why do recommender systems struggle to balance accuracy and diversity?
Recommender systems treat accuracy and diversity as competing objectives, requiring separate tuning. But what if the conflict is artificial, stemming from how we measure success rather than a fundamental tension?
extends: dimensionality is one mechanism behind the accuracy-diversity tradeoff — low dimensions can't represent diverse interests
- Why do accuracy-optimized recommenders crowd out minority interests?
Explores why recommendation models that maximize accuracy systematically over-represent a user's dominant interests while suppressing their lesser ones, even when both are measurable and real.
complements: dimension-induced popularity overfitting is the model-level cause; calibration is the post-hoc fix
- Do hash collisions really harm popular recommendation items?
Hash-based embedding tables assume uniform ID distribution, but real recommender systems show heavy-tailed frequency patterns. The question explores whether collisions actually concentrate damage on the high-traffic entities that matter most.
complements: both are skewed-distribution failures at the embedding layer — collisions concentrate on heavy items, dimensions overfit to popular ones
- How can user vectors capture diverse interests without exploding in size?
Fixed-length user vectors compress all interests into one representation, losing information about varied tastes. Can we represent diverse interests efficiently without expanding dimensionality?
complements: same dimension-bottleneck diagnosis at the user side — DIN's candidate-conditional activation is one workaround
- Do different recommender types shape opinion convergence differently?
Explores whether the mechanism by which products are recommended—buying together versus viewing together—creates distinct patterns in how product ratings converge or diverge across a network.
extends: dimension-induced popularity overfitting connects to opinion-convergence dynamics at population level
Original note title
low-dimensional embeddings cause long-term unfairness through popularity overfitting — diversity follows from dimensionality