Does limiting reasoning per turn improve multi-turn search quality?
When language models engage in iterative search cycles, does capping reasoning at each turn—rather than just total compute—help preserve context for subsequent retrievals and improve overall search effectiveness?
The overthinking cluster established that extended reasoning within a single query degrades accuracy beyond a critical token threshold. ASearcher extends this to multi-turn search: each turn's reasoning must also be capped, but for a different reason. In multi-turn search, the problem is not just variance inflation within one response — it is that excessive reasoning in one turn consumes context that subsequent retrieval rounds need.
The mechanism: in an iterative search cycle (query → retrieve → reason → refine query → retrieve again), each reasoning step takes up context. If turn N uses its full reasoning budget, turn N+1 has less context available to incorporate new retrieved evidence. The search agent effectively degrades its own ability to update on new information by overthinking in early turns.
This is a distinct failure mode from single-turn overthinking. Single-turn overthinking produces high-variance output from one extended reasoning chain. Multi-turn overthinking produces a degraded retrieval loop in which later turns operate with less fresh evidence than they need. The fix is different too: not just total compute capping, but per-turn reasoning budgets that preserve context headroom for subsequent iterations.
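The context-headroom argument can be made concrete with a back-of-the-envelope sketch. This is a minimal illustration with hypothetical numbers (the window size, evidence size, and token budgets below are assumptions, not values from ASearcher): in a fixed context window, each retrieval round must fit both its retrieved evidence and that turn's reasoning, so capping per-turn reasoning directly buys more retrieval rounds.

```python
# Minimal sketch (hypothetical numbers): per-turn reasoning caps preserve
# context headroom for later retrieval rounds in a fixed context window.

CONTEXT_WINDOW = 8192      # total tokens available to the agent (assumed)
EVIDENCE_PER_TURN = 1500   # tokens of retrieved evidence each round adds (assumed)

def rounds_completed(reasoning_per_turn: int) -> int:
    """Count how many retrieve-then-reason rounds fit in the window,
    given a fixed reasoning spend per turn."""
    used, rounds = 0, 0
    while used + EVIDENCE_PER_TURN + reasoning_per_turn <= CONTEXT_WINDOW:
        used += EVIDENCE_PER_TURN + reasoning_per_turn
        rounds += 1
    return rounds

# Unrestricted reasoning (say 2500 tokens/turn) vs a per-turn cap (500 tokens/turn):
print(rounds_completed(2500))  # 2 rounds before the window is exhausted
print(rounds_completed(500))   # 4 rounds: the cap doubles the retrieval depth
```

The same total reasoning spend, allocated in smaller per-turn slices, leaves room for more rounds of fresh evidence, which is the mechanism the note describes: a total-compute cap alone cannot prevent one early turn from crowding out later retrievals.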
Building on "Do iterative refinement methods suffer from overthinking?", this finding places multi-turn search squarely in the same family of problems. The timescale is the retrieval cycle rather than the self-revision step, but the mechanism, sequential iteration that amplifies errors rather than correcting them, is identical. The practical implication: DR agent design must set per-turn reasoning limits, not just overall query time limits.
Source: Deep Research
Related concepts in this collection
- Does more thinking time always improve reasoning accuracy?
Explores whether extending a model's thinking tokens linearly improves performance, or if there's a point beyond which additional reasoning becomes counterproductive.
extends: the overthinking threshold applies within each search turn, not just in single-turn reasoning
- Do iterative refinement methods suffer from overthinking?
Iterative refinement approaches like Self-Refine structurally resemble token-level overthinking in o1-like models. Does revision across multiple inference calls reproduce the same accuracy degradation seen within single inferences?
grounds: ASearcher is the retrieval-domain instance of this synthesis insight; multi-turn search is the operational context
- Does extended thinking actually improve reasoning or just increase variance?
When models think longer, do they reason better, or do they simply sample from a wider distribution of outputs that happens to cover correct answers more often? This matters because it determines whether test-time compute is genuinely scaling reasoning capability.
extends: per-turn variance inflation compounds across retrieval iterations, not just within one response
- Why does vanilla RAG produce shallow and redundant results?
Standard RAG systems get stuck in a single semantic neighborhood because their initial query determines what documents are discoverable. The question asks whether fixed retrieval strategies fundamentally limit knowledge depth compared to iterative exploration.
design constraint complement: OmniThink addresses retrieval scope via reflection-expansion; this note addresses per-turn depth via reasoning budgets; a complete iterative retrieval design requires both
- Can retrieval be scaled like reasoning at test time?
Standard RAG retrieves once, but multi-hop tasks need adaptive retrieval. Can we train models to plan retrieval chains and vary their length at test time to improve accuracy, the way test-time scaling works for reasoning?
CoRAG's tree search offers a structural alternative: instead of sequential deepening that consumes context across turns, branch retrieval chains in parallel and aggregate; best-of-N sampling over retrieval chains avoids the per-turn context pressure
- Can reinforcement learning scale beyond single-turn language tasks?
Most RL for LLMs targets simple single-turn problems. This research asks whether RL can handle multi-turn interactive environments with sparse rewards and rich environmental feedback, like real software engineering tasks.
validates: SWE-RL shows RL can learn per-turn discipline through training rather than inference-time limiting; the SWE domain's rich intermediate feedback (compiler traces, test logs) enables RL to discover the same per-turn budgeting that ASearcher imposes architecturally
Original note title
long-horizon research tasks require limiting reasoning steps per turn not just total compute because unrestricted thinking degrades iterative search quality