Knowledge Retrieval and RAG · Reinforcement Learning for LLMs · LLM Reasoning and Architecture

Can retrieval be scaled like reasoning at test time?

Standard RAG retrieves once, but multi-hop tasks need adaptive retrieval. Can we train models to plan retrieval chains and vary their length at test time to improve accuracy, the way test-time scaling works for reasoning?

Note · 2026-02-22 · sourced from RAG
Related: How should we allocate compute budget at inference time? · How should researchers navigate LLM reasoning research?

Standard RAG retrieves once and generates from what was found. Multi-hop reasoning tasks require information that can only be identified after retrieving and processing earlier information. A retriever constrained to a single shot cannot know which documents the intermediate steps will reveal are needed.

CoRAG (Chain-of-Retrieval Augmented Generation) extends the chain-of-thought training paradigm to retrieval. Training: use rejection sampling to automatically generate intermediate retrieval chains — sequences of queries, retrieved documents, and intermediate answers — augmenting existing RAG datasets that only provide final answers. The model learns to plan retrieval steps, not just generate from retrieved context.
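A minimal sketch of that rejection-sampling step, assuming hypothetical helpers `generate_subquery`, `retrieve`, `generate_answer`, and `answer_matches`; these stand in for the model and retriever calls and are not CoRAG's actual implementation, just the shape of the idea.

```python
# Hypothetical sketch: rejection-sampling intermediate retrieval chains
# for a (question, final_answer) pair from an existing RAG dataset.
# generate_subquery, retrieve, generate_answer, answer_matches are placeholders.

def sample_retrieval_chain(question, final_answer, max_hops=4):
    """Sample one candidate chain of (query, docs, intermediate answer) steps."""
    chain, context = [], []
    for _ in range(max_hops):
        sub_q = generate_subquery(question, chain)   # LLM proposes the next sub-query
        docs = retrieve(sub_q, k=5)                  # retriever returns supporting docs
        sub_a = generate_answer(sub_q, docs)         # LLM answers the sub-query
        chain.append({"query": sub_q, "docs": docs, "answer": sub_a})
        context.extend(docs)
        if answer_matches(generate_answer(question, context), final_answer):
            return chain                             # accept: chain reaches the gold answer
    return None                                      # reject: chain never got there

def build_training_chains(question, final_answer, n_samples=8):
    """Keep only chains whose final answer matches the gold label."""
    candidates = [sample_retrieval_chain(question, final_answer) for _ in range(n_samples)]
    return [c for c in candidates if c is not None]
```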

Test time: the retrieval chain length and count become dials. Greedy decoding (single chain) is fast. Best-of-N sampling (multiple chains) improves accuracy. Tree search (branching at each retrieval decision) maximizes accuracy at higher cost. The same token budget can be spent as retrieval steps, choosing depth vs. breadth at test time.
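To make the three dials concrete, here is an illustrative sketch of how the same trained model could be decoded three ways; `chain_step`, `score_chain`, and `expand` are assumed placeholder functions for next-step generation and self-scoring, not an API from the paper.

```python
# Illustrative only: three ways to spend a retrieval budget at test time.

def run_chain(question, max_hops=4, sample=False):
    """One retrieval chain; greedy when sample=False, temperature-sampled otherwise."""
    chain = []
    for _ in range(max_hops):
        step, done = chain_step(question, chain, sample=sample)  # next (query, docs, answer)
        chain.append(step)
        if done:
            break
    return chain

def best_of_n(question, n=8, max_hops=4):
    """Sample n full chains, keep the one the model scores highest."""
    return max((run_chain(question, max_hops, sample=True) for _ in range(n)),
               key=score_chain)

def tree_search(question, beam=4, max_hops=4):
    """Branch at each retrieval decision, keeping the top-scoring partial chains."""
    frontier = [[]]
    for _ in range(max_hops):
        candidates = [c + [s] for c in frontier for s in expand(question, c, k=beam)]
        frontier = sorted(candidates, key=score_chain, reverse=True)[:beam]
    return frontier[0]
```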

The scaling relationship is the same as in reasoning: more retrieval budget yields better answers, up to a point. This extends the question "Does search budget scale like reasoning tokens for answer quality?" from agentic search behavior to explicitly trained retrieval models. The test-time scaling framework is not about reasoning tokens specifically; it is about compute allocation in any iterative process.

The practical implication: RAG systems can now have a compute dial. Low-latency, low-cost serving uses greedy decoding. High-stakes queries use tree search. The dial was not available in single-shot RAG.
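A hedged sketch of what that dial could look like as a routing policy; the strategy names refer to the sketches above, and the thresholds and hop counts are invented for illustration.

```python
# Hypothetical per-request routing between retrieval-decoding strategies.
def choose_strategy(latency_budget_ms, high_stakes=False):
    if high_stakes:
        return ("tree_search", {"beam": 4, "max_hops": 6})   # max accuracy, max cost
    if latency_budget_ms < 500:
        return ("greedy", {"max_hops": 2})                   # cheapest, single chain
    return ("best_of_n", {"n": 4, "max_hops": 4})            # middle ground
```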


Source: RAG

Original note title: chain-of-retrieval augmented generation enables test-time scaling for retrieval-intensive tasks