Can smaller models outperform their LLM teachers with enough data?
Explores whether student models trained on teacher-generated labels over a much larger dataset can exceed teacher performance in production ranking tasks, and what data scale makes this possible.
LLMs have superior ranking quality but unaffordable latency for retail search. The standard distillation move is to train a smaller student model on the teacher's labels, but Walmart's setup adds a twist: the teacher LLM is first fine-tuned as a classification model with soft targets, and the student is then trained on a much larger dataset where the teacher generates labels for previously unlabeled queries.
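A minimal sketch of that two-step setup, assuming a Hugging Face-style sequence classifier as the teacher; the checkpoint path and example pairs are placeholders, not Walmart's actual models or data:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Step 1 (assumed already done): the teacher LLM has been fine-tuned as a
# relevance classifier over (query, product) pairs.
TEACHER = "path/to/finetuned-teacher"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForSequenceClassification.from_pretrained(TEACHER).eval()

@torch.no_grad()
def teacher_label(pairs, batch_size=32):
    """Step 2: run the teacher over unlabeled (query, product) pairs and
    keep its logits; softmaxed, they become soft labels for the student."""
    all_logits = []
    for i in range(0, len(pairs), batch_size):
        queries, products = zip(*pairs[i:i + batch_size])
        enc = tokenizer(list(queries), list(products), truncation=True,
                        padding=True, return_tensors="pt")
        all_logits.append(teacher(**enc).logits)
    return torch.cat(all_logits)

# Unlabeled traffic the teacher never trained on; labeling it is what
# expands the student's view of the input distribution.
unlabeled_pairs = [
    ("red running shoes", "Nike Pegasus 40 road running shoe"),
    ("red running shoes", "12-inch cast iron skillet"),
]
soft_targets = teacher_label(unlabeled_pairs).softmax(dim=-1)
```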
The empirical surprise: with enough augmented data, the student outperforms the teacher. This violates the conventional distillation framing, where the student approximates the teacher and accepts a quality gap as the cost of speed. Why it happens: the teacher's soft labels act as an oracle for the student, and the augmented dataset contains query-product pairs the teacher itself never trained on. The student therefore sees more of the input distribution than the teacher did, smoothed by the teacher's predictions, and generalizes better to the actual evaluation distribution.
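The mechanism hinges on training the student against the teacher's full output distribution rather than hard labels. The note doesn't give the exact loss, but the standard soft-target formulation (temperature-scaled KL divergence) looks like this:

```python
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, T=2.0):
    """Standard knowledge-distillation loss (Hinton et al.): KL divergence
    between temperature-softened teacher and student distributions. The
    T**2 factor keeps gradient magnitudes comparable across temperatures."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T**2
```

Soft targets carry the teacher's relative confidence across classes, which is the smoothing that lets the student generalize past the teacher on the augmented pairs.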
The architecture decision matters too. Bi-encoder retrieval allows precomputed item embeddings and approximate nearest-neighbor lookup: fast, but less effective, because query and item are encoded independently. Cross-encoder rerankers concatenate query and item, letting attention span all tokens and capturing interactions a bi-encoder can't. The two-stage retrieval-then-rerank funnel uses bi-encoders to absorb latency at the top of the funnel and cross-encoders (now LLM-distilled) lower down, where latency budgets are looser. The student-exceeds-teacher result was deployed in production with significantly positive metrics.
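The latency asymmetry that motivates the funnel is visible in a few lines. A sketch using the sentence-transformers library, with off-the-shelf public checkpoints standing in for the production models and a toy catalog as the data:

```python
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

catalog = [
    "Nike Pegasus 40 road running shoe",
    "12-inch cast iron skillet",
    "adidas Ultraboost running shoe, red",
]

# Bi-encoder: item embeddings are computed offline, so the online cost is
# one query encoding plus a (normally approximate) nearest-neighbor lookup.
bi = SentenceTransformer("all-MiniLM-L6-v2")
item_emb = bi.encode(catalog, normalize_embeddings=True)  # precomputed offline
q_emb = bi.encode("red running shoes", normalize_embeddings=True)
candidates = np.argsort(-(item_emb @ q_emb))[:100]  # brute force here; ANN in prod

# Cross-encoder: each (query, item) pair is a fresh forward pass with full
# attention across both texts, so it only sees the narrowed candidate list.
ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = ce.predict([("red running shoes", catalog[i]) for i in candidates])
reranked = [catalog[i] for i in candidates[np.argsort(-scores)]]
```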
Source: Recommenders Architectures
Related concepts in this collection
- Can we distill LLM knowledge into graphs for real-time recommendations?
  E-commerce needs sub-millisecond recommendations, but LLMs are too slow. Can we extract LLM insights offline into a knowledge graph that serves requests in production without sacrificing quality or explainability?
  extends: same architectural pattern (distill LLM offline, serve smaller model online) applied to KG construction rather than ranking
- Can small language models handle most agent tasks?
  Explores whether smaller, cheaper models are actually sufficient for the repetitive, scoped work that dominates deployed agent systems, rather than relying on large models by default.
  exemplifies: e-commerce ranking is the scoped, repetitive task where SLM-first economics applies; the student-exceeds-teacher result reinforces the case
- Can reinforcement learning align summarization with ranking goals?
  Generic LLM summaries optimize for readability, not ranking performance. Can training summarizers with downstream relevance scores as rewards fix this misalignment and produce summaries that actually help rankers match queries?
  complements: both align LLM output to a specific downstream task; distillation aligns scoring, RL aligns summarization
- Can semantic knowledge shift model behavior like reinforcement learning does?
  Can textual descriptions of successful reasoning patterns, prepended as context, achieve the same distribution shifts that RL achieves through parameter updates? This matters because it could eliminate the need for expensive fine-tuning on limited data.
  complements: both compress LLM behavior into a cheaper substrate (ranking weights vs token-prior), preserving capability at lower cost
Original note title
distilling LLM ranking into BERT cross-encoders enables production e-commerce search — augmented unlabeled data lets the student exceed the teacher