When should retrieval actually help versus hurt reasoning?
Retrieval augmentation seems universally beneficial, but does it always improve reasoning? This note explores whether some reasoning steps are better served by internal knowledge alone, and when external retrieval introduces harmful noise rather than useful information.
Retrieval augmentation is not always helpful. Some queries require external knowledge that the LLM does not have. Others require reasoning over knowledge the LLM already contains. For the second type, retrieval adds noise: potentially irrelevant retrieved documents compete with the model's correct internal representations, increasing latency without improving accuracy.
DeepRAG formalizes this as a Markov Decision Process (MDP). At each reasoning step, the model makes a binary decision: retrieve external knowledge or rely on parametric knowledge. The state is the current question plus the information gathered so far; the action is the retrieve-or-not choice; the reward is downstream answer accuracy. The model learns a policy for when to retrieve.
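To make the framing concrete, here is a minimal sketch of one such episode. Everything named here (`State`, `policy`, `run_episode`, the stand-in `retrieve_fn` and `generate_fn` callables) is an illustrative assumption, not DeepRAG's implementation; the paper learns the policy from data, whereas this stub hard-codes a rule just to show the structure.

```python
# Hedged sketch of the per-step retrieve-or-rely MDP described above.
from dataclasses import dataclass, field

RETRIEVE, PARAMETRIC = "retrieve", "parametric"  # the binary action space


@dataclass
class State:
    """Current question plus all information gathered so far."""
    question: str
    history: list[tuple[str, str]] = field(default_factory=list)  # (subquery, info)


def policy(state: State) -> str:
    """Learned in DeepRAG; stubbed here as 'retrieve only on the first step'."""
    return RETRIEVE if not state.history else PARAMETRIC


def run_episode(question, subqueries, retrieve_fn, generate_fn, gold_answer):
    """One episode: a per-step action choice, then a terminal accuracy reward."""
    state = State(question)
    for sq in subqueries:
        if policy(state) == RETRIEVE:
            info = retrieve_fn(sq)                 # external knowledge
        else:
            info = generate_fn(sq, state.history)  # parametric knowledge only
        state.history.append((sq, info))
    answer = generate_fn(question, state.history)
    return 1.0 if answer == gold_answer else 0.0   # reward: downstream accuracy


# Toy usage with stand-in callables:
# run_episode("Q?", ["sub1", "sub2"], lambda q: "doc", lambda q, h: "ans", "ans")
```

Note that the reward is terminal only: nothing in this loop charges for the retrieval calls themselves, which is exactly the cost the next paragraph makes explicit.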
The MDP framing makes explicit what standard RAG leaves implicit: retrieval is a resource with a cost, not a free improvement. Always-retrieve is a degenerate policy that ignores the cost; never-retrieve is a degenerate policy that ignores the benefit. The optimal policy adapts to step-level information needs.
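One way to write that tradeoff down explicitly (an assumed formalization for illustration; the note states the cost only qualitatively, and the penalty weight λ is hypothetical) is an objective that pays for final accuracy and charges per retrieval:

$$
\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\mathbb{1}\{\hat{a} = a^{*}\}\right] \;-\; \lambda\, \mathbb{E}_{\pi}\!\left[\sum_{t} \mathbb{1}\{a_t = \mathrm{retrieve}\}\right]
$$

Under any such objective, always-retrieve ignores the λ term entirely, while never-retrieve zeroes it out but forfeits accuracy on questions whose answers lie outside the model's parameters; any λ > 0 rewards exactly the step-level adaptivity described above.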
DeepRAG's reported 21.99% accuracy improvement comes from two sources: better answers when retrieval is used (the model issues more targeted subqueries), and reduced noise when it is not (the model stops disrupting correct parametric reasoning with irrelevant retrieved content).
The connection to "Does reasoning fine-tuning make models worse at declining to answer?" is direct: both findings highlight that LLMs trained with outcome rewards learn to always engage (always answer, always retrieve) rather than calibrating engagement to their actual knowledge state. The MDP explicitly rewires this: abstaining from retrieval (relying on parametric knowledge) becomes an active, rewarded choice.
Source: RAG
Related concepts in this collection
- When should retrieval happen during model generation?
Explores whether retrieval should occur continuously, at fixed intervals, or only when the model signals uncertainty. Standard RAG retrieves once; long-form generation requires dynamic triggering based on confidence signals.
complementary: FLARE uses confidence as trigger; DeepRAG uses a trained MDP policy as trigger; both target the same decision but with different mechanisms
- Does reasoning fine-tuning make models worse at declining to answer?
When models are trained to reason better, do they lose the ability to say 'I don't know'? This matters for high-stakes applications like medical and legal AI that depend on appropriate uncertainty.
the same over-engagement failure applies to retrieval; the MDP fixes retrieval engagement just as abstention training would fix reasoning engagement
- Can we allocate inference compute based on prompt difficulty?
Does adjusting how much compute each prompt receives—rather than using a fixed budget—improve model performance? Could smarter allocation let smaller models compete with larger ones?
adaptive allocation at the retrieval level; the MDP determines retrieval budget allocation per step
- Why do reasoning systems keep discovering new connections?
Explores whether agentic graph reasoning systems maintain a special balance between semantic diversity and structural organization that enables continuous discovery of novel conceptual relationships.
both formalize reasoning over external knowledge as per-step optimization; DeepRAG decides whether to retrieve at each step while ComoRAG decides which graph edges to explore, both demonstrating that adaptive per-step decisions outperform uniform policies
- Can document count be learned instead of fixed in RAG?
Standard RAG systems use a fixed number of documents regardless of query complexity. Can an RL agent learn to dynamically select both how many documents and their order based on what helps the generator produce correct answers?
complementary RL optimization in RAG: DeepRAG learns when to retrieve (per-step binary), DynamicRAG learns what to include from retrieved results (count and order); both use generator quality as reward signal
- Does supervising retrieval steps outperform final answer rewards?
Can intermediate feedback on retrieval decisions—which documents to fetch, when to stop—train agentic RAG systems more effectively than rewarding only the final answer? This matters because poor retrieval paths can accidentally succeed or good ones can fail on noisy metrics.
DeepRAG's MDP framing provides the theoretical structure, RAG-Gym provides the training methodology: process-level rewards supervise the quality of the retrieval steps that the MDP policy selects
- Does RL improve domain reasoning by adding knowledge or removing it?
When reinforcement learning improves reasoning in specialized domains like medicine, is it teaching models new facts or preventing them from using wrong ones? Understanding this distinction matters for how we design RL training.
the MDP's "use parametric knowledge" action is the retrieval analog of RL pruning: both suppress suboptimal engagement (unnecessary retrieval / inaccurate knowledge paths) rather than adding new capability
- Can reasoning systems maintain memory across multiple retrieval cycles?
Does integrating evidence across iterative retrieval steps—rather than treating each step independently—help systems resolve contradictions and build coherent understanding in complex narratives?
ComoRAG adds the statefulness dimension that DeepRAG's per-step MDP lacks: while DeepRAG decides whether to retrieve at each step, ComoRAG maintains a persistent memory workspace that integrates evidence across cycles, enabling contradiction detection and resolution
Original note title: retrieval-augmented reasoning as Markov Decision Process enables per-step parametric versus external knowledge switching