Can reinforcement learning align summarization with ranking goals?
Generic LLM summaries optimize for readability, not ranking performance. Can training summarizers with downstream relevance scores as rewards fix this misalignment and produce summaries that actually help rankers match queries?
E-commerce search rankers face a length-vs-information tradeoff. Product titles are too sparse; product descriptions are too verbose for cross-encoder rankers under latency budgets. The intuitive fix is to summarize descriptions, but generic LLM summarization optimizes for "good summary" — readability, faithfulness — not for "summary that helps the ranker". A summary the LLM judges good might omit precisely the attribute the query is asking about.
Doc2Query attacks the problem from the other direction, generating likely queries for a document instead of summarizing it, but its target is misaligned too: the generator is trained to imitate plausible queries for the document, not to improve the downstream ranker's scores. Both approaches share the same flaw: the learning signal is never connected to the ranking metric.
ReLSum's contribution is to train the summarizer with reinforcement learning where the reward is the downstream relevance score the summary produces. The model learns to keep tokens that improve recall and NDCG when fed to the ranker, regardless of whether they make a summary read well. A pet food summary becomes "Taurine, non-GMO, chicken bone broth" — three attributes the ranker can match against queries — rather than a fluent paragraph the ranker can't efficiently parse. The framework optimizes the right thing because it includes the right signal, and online metrics show user engagement improvements. The principle generalizes: any intermediate text generation feeding a downstream model should be trained against that downstream model's loss, not against a generic generation objective.
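To make the training loop concrete, here is a minimal REINFORCE-style sketch of the idea, not ReLSum's published implementation: sample a summary from the summarizer policy, score it with the frozen downstream ranker, and use that relevance score as the reward. The `ranker.score` call, the batch fields, and the generation settings are all illustrative assumptions.

```python
# Minimal REINFORCE sketch: reward = downstream relevance score.
# `ranker.score`, the batch fields, and padding handling are assumptions,
# not ReLSum's published code.
import torch
import torch.nn.functional as F

def relsum_step(summarizer, ranker, tokenizer, optimizer, batch):
    # 1. Sample summaries from the current policy (sampling itself is
    #    not differentiated, so no grad is needed here).
    enc = tokenizer(batch["description"], return_tensors="pt",
                    padding=True, truncation=True)
    with torch.no_grad():
        gen = summarizer.generate(**enc, do_sample=True, max_new_tokens=64)
    prompt_len = enc["input_ids"].shape[1]
    summaries = tokenizer.batch_decode(gen[:, prompt_len:],
                                       skip_special_tokens=True)

    # 2. Reward: the relevance score the frozen ranker assigns to
    #    (query, summary). This is the alignment step: the summarizer
    #    is judged by the model it feeds, not by summary quality.
    with torch.no_grad():
        reward = ranker.score(batch["query"], summaries)  # assumed API

    # 3. Re-score the sampled sequences to get differentiable log-probs
    #    of the generated tokens under the current policy.
    logits = summarizer(gen).logits[:, :-1]
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(2, gen[:, 1:].unsqueeze(-1)).squeeze(-1)
    # Sum over summary tokens only; pads after EOS are included here,
    # a simplification a real implementation would mask out.
    seq_logp = token_logp[:, prompt_len - 1:].sum(dim=-1)

    # 4. REINFORCE with a mean-reward baseline to reduce variance.
    advantage = reward - reward.mean()
    loss = -(advantage * seq_logp).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the reward could just as well be a listwise metric such as recall or NDCG computed over a retrieved slate, matching the framing above; the pointwise score keeps the sketch short. A KL penalty against the pretrained summarizer or PPO-style clipping would be common stabilizers in this kind of setup.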
Source: Recommenders Architectures
Related concepts in this collection
- Can smaller models outperform their LLM teachers with enough data?
  Explores whether student models trained on expanded teacher-generated labels can exceed teacher performance in production ranking tasks, and what data scale makes this possible.
  complements: both align LLM output to a specific downstream task; distillation aligns scoring, ReLSum aligns summarization.
- Does LLM input augmentation beat direct LLM recommendation?
  Can LLMs enrich item descriptions more effectively than making recommendations directly? This explores whether specialized models work better when LLMs focus on what they do best: content understanding rather than ranking.
  extends: ReLSum is the RL-aligned version of summary-as-input-augmentation; the generic LLM summary becomes a ranking-aligned summary.
- Do comparisons help users evaluate items better than isolated descriptions?
  Can framing product evaluations relationally, by comparing to other items, ground assessment in user reasoning better than absolute descriptions? This matters because recommendation explanations often ask users to do the comparison work mentally.
  complements: aspect-controlled and ranking-aligned generation are alternative LLM-output-shaping methods for downstream recommendation.
- Can we distill LLM knowledge into graphs for real-time recommendations?
  E-commerce needs sub-millisecond recommendations, but LLMs are too slow. Can we extract LLM insights offline into a knowledge graph that serves requests in production without sacrificing quality or explainability?
  complements: same offline-LLM-for-online-recommendation pattern; KG distillation vs ranking-aligned summarization.
Original note title: RL-trained query-relevant summaries align summarization with downstream ranking — fixing the misaligned-target problem of generic LLM summarization