Recommender Systems Knowledge Retrieval and RAG

Can smaller models outperform their LLM teachers with enough data?

Explores whether student models trained on expanded teacher-generated labels can exceed teacher performance in production ranking tasks, and what data scale makes this possible.

Note · 2026-05-03 · sourced from Recommenders Architectures
What breaks when specialized AI models reach real users?

LLMs deliver superior ranking quality but their latency is unaffordable for retail search. The standard distillation move is to train a smaller student model on the teacher's labels. Walmart's setup adds a twist: the teacher LLM is first trained as a classification model with soft targets, and the student is then trained on a much larger dataset built by having the teacher label previously unlabeled queries.
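The soft-target part can be sketched as a cross-entropy between the teacher's softened output distribution and the student's, instead of hard 0/1 relevance labels. A minimal numpy sketch, assuming a temperature-scaled softmax as in standard distillation; function names and the temperature value are illustrative, not from the source:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the student learns to match soft targets, not hard labels."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```

The soft targets carry the teacher's relative confidence across classes, which is exactly the "smoothing" signal the note credits for the student's generalization.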

The empirical surprise: with enough augmented data, the student model outperforms the teacher. This violates the conventional distillation framing where the student approximates the teacher and accepts a quality gap as the cost of speed. Why it happens: the teacher's labels are an oracle for the student, and the augmented dataset contains query-product pairs the teacher never explicitly trained on. The student gets to see more of the input distribution than the teacher did, smoothed by the teacher's predictions, which lets it generalize better than the teacher to the actual evaluation distribution.
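The augmentation step above amounts to running the (expensive, offline) teacher over query-product pairs it never saw in training, then handing those soft labels to the student. A hypothetical sketch; `toy_teacher` is a stand-in for the LLM classifier, and all names are illustrative rather than Walmart's:

```python
def augment_with_teacher(unlabeled_pairs, teacher_score):
    """Build (query, product, soft_label) triples for student training."""
    return [(q, p, teacher_score(q, p)) for q, p in unlabeled_pairs]

def toy_teacher(query, product):
    """Toy relevance oracle (token Jaccard overlap), standing in for the
    teacher LLM's soft relevance score."""
    q, p = set(query.split()), set(product.split())
    return len(q & p) / max(len(q | p), 1)

pairs = [("red shoes", "red running shoes"), ("red shoes", "garden hose")]
labeled = augment_with_teacher(pairs, toy_teacher)
```

Because labeling happens offline, the augmented set can be orders of magnitude larger than any human-labeled corpus, which is what lets the student see more of the input distribution than the teacher did.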

The architecture decision matters too. Bi-encoder retrieval allows precomputed item embeddings and approximate nearest-neighbor lookup — fast but less effective, because query and item are encoded independently. Cross-encoder rerankers concatenate query and item, allowing attention across all tokens and capturing interactions a bi-encoder can't. The two-stage retrieval-then-rerank funnel uses bi-encoders to handle latency at the top of the funnel and cross-encoders (now LLM-distilled) where latency constraints are looser. The student-exceeds-teacher result was deployed in production with significantly positive metrics.
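The two-stage funnel can be sketched with numpy. Stage 1 scores the whole catalog with a cheap dot product over precomputed embeddings; stage 2 rescores only the surviving candidates pairwise. This is a sketch under stated assumptions: `cross_encode` is a placeholder for the distilled BERT reranker, and the random embeddings stand in for real encoders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: bi-encoder. Item embeddings are precomputed offline; at query
# time only the query is encoded, and scoring is one matmul (ANN in practice).
item_embs = rng.normal(size=(1000, 64))   # precomputed catalog embeddings
query_emb = rng.normal(size=(64,))        # encoded at request time
scores = item_embs @ query_emb            # cheap independent scoring
top_k = np.argsort(scores)[-50:][::-1]    # candidate set for reranking

# Stage 2: cross-encoder. Each (query, item) pair is scored jointly; a real
# cross-encoder runs a transformer over the concatenated token sequences,
# so it only sees the 50 candidates, not the full catalog.
def cross_encode(q_emb, i_emb):
    # placeholder pairwise interaction, standing in for the distilled reranker
    return float(q_emb @ i_emb)

reranked = sorted(top_k, key=lambda i: cross_encode(query_emb, item_embs[i]),
                  reverse=True)
```

The latency math is the point: the expensive model's cost scales with the candidate count (50 here), not the catalog size (1000), which is what makes a distilled cross-encoder affordable at the bottom of the funnel.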


distilling LLM ranking into BERT cross-encoders enables production e-commerce search — augmented unlabeled data lets the student exceed the teacher