Recommender Systems

Does embedding dimensionality secretly drive popularity bias in recommenders?

Conventional wisdom treats low-dimensional models as overfitting protection. But does this practice inadvertently cause recommenders to systematically favor popular items, reducing diversity and fairness regardless of the optimization metric used?

Note · 2026-05-03 · sourced from Recommenders General
What breaks when specialized AI models reach real users? How do recommendation feeds shape what people see and believe?

Standard ML practice treats low-dimensional models as a hedge against overfitting. Smaller hidden layers, smaller embedding sizes, fewer parameters — all traditional ways to fight memorization of training noise. Naoto Ohsaka and Riku Togashi's argument is that in recommender systems this prescription has a long-term side effect that conventional model selection misses: low-dimensional dot-product models systematically overfit toward popularity bias.

When the user/item embedding dimension is too small to delineate individual tastes, the model's best response under ranking-quality optimization is to push everyone toward popular items. Popular items get recommended to more people than their preferences justify. This produces nondiverse and unfair recommendations regardless of the optimization metric. Worse, it creates insufficient exposure data for less popular items, so the next training round has even thinner signal on niche taste, compounding the bias.
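The mechanism can be seen in a toy simulation. This is a sketch, not the paper's experiment: the synthetic data (power-law popularity plus one "genre" of niche items per user), the use of truncated SVD as the dot-product model, and all sizes are illustrative assumptions. The point is only that at low rank the model's top-k lists concentrate on popular items, so catalog coverage collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, topk = 500, 100, 10

# Synthetic implicit feedback: power-law item popularity, plus each user
# belongs to one of 10 genres and likes items in that genre's block of 10.
pop = 1.0 / np.arange(1, n_items + 1)
genre = rng.integers(0, 10, n_users)
R = (rng.random((n_users, n_items)) < 0.5 * pop).astype(float)
for u in range(n_users):
    block = slice(genre[u] * 10, genre[u] * 10 + 10)
    R[u, block] = (rng.random(10) < 0.5).astype(float)

def catalog_coverage(R, d):
    """Fit a rank-d dot-product model (truncated SVD) and return the
    fraction of the catalog that appears in any user's top-k list."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    scores = (U[:, :d] * s[:d]) @ Vt[:d]
    scores[R > 0] = -np.inf                     # mask already-seen items
    lists = np.argsort(-scores, axis=1)[:, :topk]
    return len(np.unique(lists)) / R.shape[1]

low_d, high_d = catalog_coverage(R, 2), catalog_coverage(R, 32)
print(f"coverage@10  d=2: {low_d:.2f}   d=32: {high_d:.2f}")
```

At d=2 the leading factors can only encode the global popularity gradient, so every user's unseen-item ranking is roughly the same popularity ordering; at d=32 the genre structure fits into the factors and the top-k lists spread across the catalog.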

The dimensionality of user/item embeddings tends to go relatively unnoticed as a hyperparameter compared to learning rates or regularization. Developers select models on ranking quality alone, choose low-dimensional models to cut storage and serving cost, and discover the diversity collapse only after deployment. Even when developers select on both ranking quality and diversity, the achievable trade-off is severely limited if the dimensionality is tuned over too narrow a range.

The actionable point: embedding dimension is a fairness/diversity hyperparameter, not just a memory/capacity hyperparameter. Setting it low to save space is implicitly choosing popularity bias. The trade-off needs to be made explicit during model design, not patched in post hoc with diversity re-rankers.
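One way to make the trade-off explicit during model selection is to treat diversity as a hard constraint in the dimensionality sweep rather than an afterthought. A minimal sketch; the metric names, thresholds, and sweep numbers below are hypothetical, not from the source:

```python
def select_dim(candidates, min_quality, min_coverage):
    """candidates: list of (dim, ranking_quality, catalog_coverage) tuples
    from a hyperparameter sweep. Return the smallest embedding dimension
    that clears BOTH floors, so choosing a low dim to save memory can
    never silently sacrifice diversity."""
    feasible = [c for c in candidates
                if c[1] >= min_quality and c[2] >= min_coverage]
    if not feasible:
        raise ValueError("no dimension satisfies both constraints; "
                         "widen the dimensionality search range")
    return min(feasible, key=lambda c: c[0])[0]

# Hypothetical sweep results: (dim, NDCG@10, coverage@10).
sweep = [(8, 0.31, 0.12), (32, 0.34, 0.41), (128, 0.33, 0.67)]
print(select_dim(sweep, min_quality=0.30, min_coverage=0.40))
```

Selecting on quality alone would pick d=8 here (smallest model above the quality floor); the coverage constraint forces the sweep up to d=32, surfacing the memory-vs-diversity cost at design time instead of after deployment.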


