Knowledge Retrieval and RAG

Can retrieval learn what actually helps answer questions?

Standard RAG trains retrievers to find similar documents and generators to produce answers separately. But does surface similarity match what genuinely helps generate correct responses? This note explores whether the retriever can instead be trained on feedback from answer quality.

Note · 2026-02-22 · sourced from RAG

Standard RAG trains the retriever and generator separately. The retriever optimizes for document relevance — returning chunks that look like what was asked. The generator optimizes for answer quality — producing correct, coherent responses from whatever the retriever provides. The two objectives are decoupled, which means the retriever can learn to retrieve documents that are semantically similar but not actually useful for answering.

The fundamental problem: the retriever cannot receive a gradient signal from the generator without a differentiable interface between them. Text is discrete — you cannot backpropagate through "select these k chunks from a vocabulary of millions."
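To see the obstacle concretely, compare hard top-k selection with a smooth relaxation. The sketch below is illustrative, not CLaRa's actual estimator: a temperature-scaled softmax stands in for whatever differentiable top-k the paper uses. The hard mask is piecewise constant in the scores, so its gradient is zero almost everywhere; the relaxed weights are smooth, so a loss computed downstream can inform the scores.

```python
import numpy as np

def hard_topk(scores, k):
    # Hard selection: a 0/1 mask over documents. Piecewise constant
    # in the scores, so gradients are zero almost everywhere and the
    # generator's loss cannot reach the retriever.
    mask = np.zeros_like(scores)
    mask[np.argsort(scores)[-k:]] = 1.0
    return mask

def soft_topk(scores, k, temperature=0.5):
    # A smooth relaxation (an assumption for illustration, not the
    # paper's exact estimator): softmax weights that concentrate on
    # the top-k as temperature -> 0 but stay differentiable.
    w = np.exp((scores - scores.max()) / temperature)
    w = w / w.sum()
    return k * w  # scale so the weights sum to k, like a k-hot mask

scores = np.array([2.0, 0.1, 1.5, -0.3])
print(hard_topk(scores, k=2))  # [1. 0. 1. 0.]
print(soft_topk(scores, k=2))
```

At low temperature the soft weights approach the hard mask, trading gradient signal against selection sharpness.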

CLaRa (Continuous Latent Reasoning) solves this with shared continuous document representations. Documents are encoded once into compact memory-token vectors. The reranker and generator both operate in this continuous space. During training, the next-token prediction loss from the generator propagates back through both modules via a differentiable top-k estimator. The retriever learns from the generator's success and failure.
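The forward pass can be sketched end to end. Everything below is a simplified assumption: one memory vector per document (CLaRa compresses into several memory tokens), a dot-product reranker, and a softmax relaxation in place of the paper's differentiable top-k. The point is the plumbing: because documents enter the generator as continuously weighted vectors, a next-token loss on the output would be differentiable with respect to the reranker's scores.

```python
import numpy as np
rng = np.random.default_rng(0)

D, H = 6, 8                         # number of documents, hidden size
docs = rng.normal(size=(D, H))      # memory-token vectors (one per doc, simplified)
query = rng.normal(size=(H,))       # query encoded into the same space

# Reranker: score each document against the query in the shared space.
scores = docs @ query

# Differentiable top-k (softmax relaxation stands in for the estimator).
k, tau = 2, 0.5
w = np.exp((scores - scores.max()) / tau)
w = k * w / w.sum()

# Generator input: a continuously weighted mixture of document vectors.
# In training, the generation loss backpropagates through `w` into
# `scores`, so the reranker learns from answer quality.
context = (w[:, None] * docs).sum(axis=0)
print(context.shape)  # (8,)
```

In an autograd framework the same computation lets one loss update both modules; numpy is used here only to keep the sketch self-contained.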

The learned alignment: the retriever stops optimizing for surface similarity and starts optimizing for "does including this document improve the answer?" Documents that look relevant but do not contribute get deprioritized. Documents that seem tangential but bridge a reasoning gap get upweighted.

This matters because the gap between "similar to query" and "useful for generating the answer" is large in practice. Retrieval trained on human relevance labels is approximating what humans think is relevant. Retrieval trained on generation loss is learning what is actually useful for the downstream task.



Joint optimization of retriever and generator through shared continuous representations aligns retrieval with answer quality.