Can retrieval learn what actually helps answer questions?
Standard RAG trains the retriever to find similar documents and the generator to produce answers as two separate systems. But does surface similarity track what genuinely helps generate correct responses? This note explores whether retrieval can instead be trained on feedback from answer quality.
Standard RAG trains the retriever and generator separately. The retriever optimizes for document relevance — returning chunks that look like what was asked. The generator optimizes for answer quality — producing correct, coherent responses from whatever the retriever provides. The two objectives are decoupled, which means the retriever can learn to retrieve documents that are semantically similar but not actually useful for answering.
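A toy similarity-only retriever makes the failure mode concrete. This is a minimal sketch with an invented vocabulary and documents (not any real RAG system): a chunk that merely echoes the query's wording outranks the chunk that actually contains the answer.

```python
import numpy as np

# Hypothetical mini-corpus: bag-of-words cosine similarity, the kind of
# surface-matching objective a standalone retriever optimizes.
vocab = ["what", "is", "the", "capital", "of", "france", "paris"]

def embed(text):
    """Unit-normalized bag-of-words vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab.index(w)] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def cosine(a, b):
    return float(np.dot(a, b))

query      = "what is the capital of france"
doc_echo   = "france capital of the france capital"   # echoes the query, no answer
doc_answer = "paris is the capital"                   # actually contains the answer

sim_echo   = cosine(embed(query), embed(doc_echo))
sim_answer = cosine(embed(query), embed(doc_answer))
# sim_echo > sim_answer: similarity-trained retrieval ranks the useless chunk first
```

A generator-loss signal would invert this ranking, since only `doc_answer` reduces the answer loss.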
The fundamental problem: the retriever cannot receive a gradient signal from the generator without a differentiable interface between them. Text is discrete — you cannot backpropagate through "select these k chunks from a vocabulary of millions."
CLaRa (Continuous Latent Reasoning) solves this with shared continuous document representations. Documents are encoded once into compact memory-token vectors. The reranker and generator both operate in this continuous space. During training, the next-token prediction loss from the generator propagates back through both modules via a differentiable top-k estimator. The retriever learns from the generator's successes and failures.
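The straight-through idea behind such estimators can be sketched in a few lines (illustrative numpy with a manual backward pass, not CLaRa's actual estimator): the forward pass selects a hard top-k document mask, while the backward pass routes the upstream loss gradient through a softmax relaxation, so every retrieval score receives a signal.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def topk_straight_through(scores, k):
    """Forward: hard 0/1 top-k mask over documents.
    Backward: gradient of the softmax relaxation (straight-through)."""
    hard = np.zeros_like(scores)
    hard[np.argsort(scores)[-k:]] = 1.0
    soft = softmax(scores)
    def backward(grad_out):
        # Softmax Jacobian-vector product: (J g)_j = s_j * (g_j - s.g)
        return soft * (grad_out - np.dot(soft, grad_out))
    return hard, backward

# Toy "generator loss" signal: the answer improves if document 2 is included,
# so the loss gradient w.r.t. the mask is -1 at index 2.
scores = np.array([2.0, 1.0, 0.5, -1.0])   # retrieval scores for 4 documents
mask, backward = topk_straight_through(scores, k=2)
grad_loss_wrt_mask = -np.eye(4)[2]
grad_scores = backward(grad_loss_wrt_mask)
# grad_scores[2] < 0: gradient descent raises document 2's retrieval score.
# Every score gets a gradient, even though the forward selection was discrete.
```

The key property: discrete selection in the forward pass, dense gradients in the backward pass, which is exactly what lets generation loss reach the retriever.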
The learned alignment: the retriever stops optimizing for surface similarity and starts optimizing for "does including this document improve the answer?" Documents that look relevant but do not contribute get deprioritized. Documents that seem tangential but bridge a reasoning gap get upweighted.
This matters because the gap between "similar to query" and "useful for generating the answer" is large in practice. Retrieval trained on human relevance labels is approximating what humans think is relevant. Retrieval trained on generation loss is learning what is actually useful for the downstream task.
Source: RAG
Related concepts in this collection
- Can document count be learned instead of fixed in RAG?
  Standard RAG systems use a fixed number of documents regardless of query complexity. Can an RL agent learn to dynamically select both how many documents to retrieve and their order, based on what helps the generator produce correct answers?
  Connection: DynamicRAG uses RL with the generator's output as the reward signal; CLaRa uses differentiable joint training. Both solve the retrieval-generation alignment problem via generator feedback.
- Do language models actually use their encoded knowledge?
  Probes can detect that LMs encode facts internally, but do those encoded facts causally influence what the model generates? This explores the gap between knowing and doing.
  Connection: the decoupled optimization problem in standard RAG is an instance of this gap: retrieval encodes information that does not causally improve generation. CLaRa is one fix.
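The RL side of that DynamicRAG comparison can be sketched with vanilla REINFORCE in a toy environment (the reward function, per-document cost, and complexity distribution below are all invented for illustration): a policy over the document count k is updated from a generator-style scalar reward, with no differentiable path through retrieval required.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy reward standing in for "did the generator answer correctly":
# 1 if the k retrieved documents cover the query's latent complexity,
# minus a per-document cost for retrieving too much.
def reward(k, complexity):
    return (1.0 if k >= complexity else 0.0) - 0.15 * k

logits = np.zeros(5)    # policy over k in {1, ..., 5}
baseline = 0.0          # running reward baseline to reduce variance

for _ in range(3000):
    complexity = int(rng.integers(1, 4))    # most queries need 1-3 docs
    p = softmax(logits)
    k = int(rng.choice(5, p=p)) + 1
    r = reward(k, complexity)
    # REINFORCE: raise the log-prob of the sampled k, scaled by advantage
    logits += 0.1 * (r - baseline) * (np.eye(5)[k - 1] - p)
    baseline += 0.05 * (r - baseline)

learned_k = int(np.argmax(softmax(logits))) + 1   # the policy's preferred count
```

The contrast with the differentiable approach: REINFORCE only needs the reward as a number, so it tolerates fully discrete retrieval decisions, at the cost of higher-variance learning signals.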
joint optimization of retriever and generator through shared continuous representations aligns retrieval with answer quality