Recommender Systems · Knowledge Retrieval and RAG

Can reinforcement learning align summarization with ranking goals?

Generic LLM summaries optimize for readability, not ranking performance. Can training summarizers with downstream relevance scores as rewards fix this misalignment and produce summaries that actually help rankers match queries?

Note · 2026-05-03 · sourced from Recommenders Architectures
What breaks when specialized AI models reach real users?

E-commerce search rankers face a length-vs-information tradeoff. Product titles are too sparse; product descriptions are too verbose for cross-encoder rankers under latency budgets. The intuitive fix is to summarize descriptions, but generic LLM summarization optimizes for "good summary" — readability, faithfulness — not for "summary that helps the ranker". A summary the LLM judges good might omit precisely the attribute the query is asking about.

Doc2Query approaches the problem by generating queries instead of summaries, but query generation has its own misaligned target: the generated queries are optimized to match the document, not to feed the downstream ranker. Both approaches suffer the same flaw: the learning signal never touches the ranking metric.

ReLSum's contribution is to train the summarizer with reinforcement learning, where the reward is the downstream relevance score the summary earns from the ranker. The model learns to keep the tokens that improve recall and NDCG when fed to the ranker, regardless of whether they make the summary read well. A pet-food summary becomes "Taurine, non-GMO, chicken bone broth", three attributes the ranker can match against queries, rather than a fluent paragraph the ranker can't efficiently parse. The framework optimizes the right thing because the training signal is the ranking metric itself, and online metrics show user engagement improvements.

The principle generalizes: any intermediate text generation feeding a downstream model should be trained against that downstream model's loss, not against a generic generation objective.
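A minimal sketch of that wiring, under loud assumptions: the toy extractive keep/drop policy, the stand-in ranker, and all dimensions below are illustrative, not ReLSum's actual architecture (the paper's setup is an LLM summarizer rewarded by a real cross-encoder). What the sketch does show correctly is the core mechanism, a REINFORCE update where the only reward is the frozen downstream ranker's relevance score for the sampled summary, minus a small length penalty.

```python
import torch
import torch.nn as nn

EMB, VOCAB = 64, 1000

class KeepDropPolicy(nn.Module):
    """Toy extractive summarizer: a per-token Bernoulli keep/drop policy."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.head = nn.Linear(EMB, 1)

    def forward(self, doc_tokens):                      # (T,) int tensor
        h = self.emb(doc_tokens)                        # (T, EMB)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # keep probabilities, (T,)

class FrozenRanker(nn.Module):
    """Stand-in for the downstream ranker; in production this would be the
    deployed cross-encoder, used here only as a fixed reward model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)

    def forward(self, query_tokens, summary_tokens):
        if summary_tokens.numel() == 0:                 # empty summary scores zero
            return torch.tensor(0.0)
        q = self.emb(query_tokens).mean(dim=0)
        s = self.emb(summary_tokens).mean(dim=0)
        return torch.cosine_similarity(q, s, dim=0)     # scalar relevance score

policy, ranker = KeepDropPolicy(), FrozenRanker()
for p in ranker.parameters():
    p.requires_grad_(False)                             # reward model stays frozen
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0                                          # running mean for variance reduction

for step in range(200):
    doc = torch.randint(0, VOCAB, (40,))                # stand-in product description
    query = torch.randint(0, VOCAB, (5,))               # stand-in user query
    keep_p = policy(doc)
    keep = torch.bernoulli(keep_p.detach())             # sample a summary (0/1 per token)
    summary = doc[keep.bool()]

    with torch.no_grad():                               # reward = downstream relevance
        reward = ranker(query, summary) - 0.01 * keep.sum() / len(doc)

    # REINFORCE: raise the log-probability of the sampled keep/drop decisions
    # in proportion to how much better than baseline the ranker scored them.
    log_prob = (keep * torch.log(keep_p + 1e-8)
                + (1 - keep) * torch.log(1 - keep_p + 1e-8)).sum()
    loss = -(reward - baseline) * log_prob
    opt.zero_grad()
    loss.backward()
    opt.step()
    baseline = 0.9 * baseline + 0.1 * reward.item()
```

Note the design point the sketch makes concrete: gradients reach the summarizer only through the ranker's score, so fluency never enters the objective; swap in a generative policy and an NDCG-based reward and the same loop expresses the full framework.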


Source: Recommenders Architectures

Original note title: RL-trained query-relevant summaries align summarization with downstream ranking, fixing the misaligned-target problem of generic LLM summarization