
Can one model handle both memorization and generalization?

Recommenders face a tradeoff between memorizing seen patterns and generalizing to new ones. Can a single architecture satisfy both needs without the cost of ensemble methods?

Note · 2026-05-03 · sourced from Recommenders Architectures
What breaks when specialized AI models reach real users?

A recommender has two opposed needs. Memorization captures frequent feature co-occurrences that the historical data directly supports: if users who installed Netflix tend to also install Pandora, that conjunction should fire as its own signal. Cross-product transformations on sparse features memorize this beautifully but contribute nothing for feature pairs never seen together. Generalization, on the other hand, comes from dense embeddings that smooth across feature combinations, but embeddings over-generalize when the query-item matrix is sparse and high-rank, predicting interactions where there should be none.
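A minimal sketch of what a cross-product transformation is, assuming Python; the feature names are hypothetical illustrations, not taken from the source. The point is that the feature is a pure AND: it fires only for a combination seen at training time and says nothing about any pair it never saw.

```python
def cross_product(example: dict, cross: tuple) -> int:
    """Return 1 iff every feature named in `cross` is active in this example, else 0."""
    return int(all(example.get(name) for name in cross))

# Hypothetical binary features for one impression.
example = {"user_installed_netflix": 1, "impression_app_pandora": 1}

# This AND feature gets its own weight in the wide model, so the
# co-occurrence is memorized directly...
seen_pair = cross_product(example, ("user_installed_netflix", "impression_app_pandora"))

# ...but a pair that never co-occurred in training simply never fires,
# so the wide side cannot extrapolate to it.
unseen_pair = cross_product(example, ("user_installed_netflix", "impression_app_brand_new"))

print(seen_pair, unseen_pair)  # 1 0
```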

Wide & Deep's claim is that you don't have to choose. The wide tower carries cross-product features for memorization; the deep tower carries embedding-based MLPs for generalization. The mechanically important detail is that they're trained jointly, not as an ensemble. In an ensemble, each model has to be reasonably accurate alone, so each must be larger. In joint training, the gradients flow through both at once: the wide part only needs to add cross-product features that complement what the deep part missed. This means the wide part is a small set of carefully chosen feature crosses rather than a full linear model.

The architecture implicitly trades off two distinct failure modes — overgeneralization on sparse data (deep alone) and inability to extend to new combinations (wide alone) — by letting each fix the other's failure mode under shared supervision.


Source: Recommenders Architectures

Original note: wide and deep models combine memorization and generalization — joint training optimizes both with smaller models than ensembles require