Knowledge Retrieval and RAG · LLM Reasoning and Architecture

Can reasoning systems maintain memory across multiple retrieval cycles?

Does integrating evidence across iterative retrieval steps—rather than treating each step independently—help systems resolve contradictions and build coherent understanding in complex narratives?

Note · 2026-02-23 · sourced from Memory

ComoRAG draws on the prefrontal cortex's metacognitive regulation process: reasoning is not a single retrieval action but a dynamic interplay between evidence acquisition (goal-directed memory probes) and knowledge consolidation (integrating new findings with past information). The key distinction from existing multi-step retrieval is that each cycle's retrieval is informed by an evolving understanding rather than executed independently.
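The acquisition/consolidation interplay can be sketched as a loop in which each probe is conditioned on the current memory state. Everything below is a toy stand-in, not ComoRAG's actual implementation: the function names, the keyword-overlap retriever, and the fixed-point stopping rule are all illustrative.

```python
# Sketch of a metacognitive retrieval loop: each cycle's probe is
# conditioned on the evolving memory, unlike stateless multi-step
# retrieval. All components here are toy stand-ins for ComoRAG's.

def generate_probe(query, memory):
    # Goal-directed memory probe: refine the query with what is known so far.
    return query if not memory else f"{query} | given: {'; '.join(memory)}"

def retrieve(probe, corpus):
    # Toy retrieval: return passages sharing at least one word with the probe.
    words = set(probe.lower().split())
    return [p for p in corpus if words & set(p.lower().split())]

def consolidate(memory, evidence):
    # Knowledge consolidation: integrate new findings, dropping duplicates.
    return memory + [e for e in evidence if e not in memory]

def reason_loop(query, corpus, max_cycles=3):
    memory = []
    for _ in range(max_cycles):
        probe = generate_probe(query, memory)
        new_memory = consolidate(memory, retrieve(probe, corpus))
        if new_memory == memory:   # no new evidence this cycle: stop
            break
        memory = new_memory
    return memory
```

The point of the sketch is the signature of `generate_probe`: memory is an input to retrieval, which is exactly what a stateless multi-step pipeline lacks.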

The architecture has two components:

1. Hierarchical Knowledge Source — three layers that model the text from complementary cognitive dimensions, so one probe can be answered at several levels of abstraction.

2. Metacognitive Control Loop — alternating evidence acquisition (goal-directed memory probes against the knowledge source) and knowledge consolidation (integrating new findings into a persistent memory workspace), repeated cycle by cycle.
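The note does not enumerate the three layers, so the sketch below stays generic: a hierarchical knowledge source modeled, very loosely, as independent retrievers over different views of the same text, pooled per probe. Layer names and the overlap-based search are illustrative assumptions.

```python
# Toy model of a hierarchical knowledge source: the same narrative is
# indexed under complementary views, and one probe pools hits from every
# layer. Layer names ("chunks", "summaries") are invented for the demo.

class Layer:
    def __init__(self, name, entries):
        self.name = name
        self.entries = entries  # view-specific text units

    def search(self, probe):
        words = set(probe.lower().split())
        return [(self.name, e) for e in self.entries
                if words & set(e.lower().split())]

class HierarchicalSource:
    def __init__(self, layers):
        self.layers = layers

    def search(self, probe):
        # Pool evidence across all layers for a single probe.
        hits = []
        for layer in self.layers:
            hits.extend(layer.search(probe))
        return hits
```

Tagging each hit with its layer name matters downstream: consolidation can then weigh raw-text evidence differently from higher-level views.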

The practical demonstration: for "Why did Snape kill Dumbledore?", stateless multi-step retrieval retrieves contradictory facts ("Snape protects Harry" / "Snape kills Dumbledore") but cannot integrate them. ComoRAG's memory workspace evolves through contradiction detection to coherent resolution ("an act of loyalty, not betrayal") because each retrieval cycle builds on the previous cycle's understanding.
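The contradiction-to-resolution arc can be mimicked with a toy workspace. The hard-coded contradiction pairs and the "loyalty" keyword test are illustrative hacks standing in for what would be LLM-based consolidation; they are not ComoRAG's mechanism.

```python
# Toy memory workspace that flags contradictory evidence and stays
# "unresolved" until a later cycle supplies an integrating fact. The
# contradiction oracle and keyword check are stand-ins for an LLM judge.

class MemoryWorkspace:
    def __init__(self, contradictions):
        self.contradictions = contradictions  # known-conflicting fact pairs
        self.facts = []

    def add(self, fact):
        if fact not in self.facts:
            self.facts.append(fact)

    def unresolved(self):
        # A conflict is open while both sides are present and no stored
        # fact integrates them (here: mentions "loyalty").
        for a, b in self.contradictions:
            if a in self.facts and b in self.facts:
                if not any("loyalty" in f for f in self.facts):
                    return True
        return False
```

An unresolved workspace is what triggers the next, deeper retrieval cycle; a stateless pipeline has no equivalent signal.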

Building on "Can retrieval be scaled like reasoning at test time?" (CoRAG), ComoRAG adds a statefulness dimension: CoRAG interleaves retrieval with reasoning, but ComoRAG maintains a persistent memory workspace that accumulates and integrates evidence across cycles. That workspace is the key differentiator: it lets the system detect contradictions and resolve them through deeper exploration instead of treating each retrieval independently.
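The CoRAG/ComoRAG contrast reduces to one question: does cycle *n*'s probe see the memory from cycles 1..n-1? A minimal side-by-side sketch (function names and probe format are invented for illustration):

```python
# Stateless multi-step retrieval re-issues the same query every cycle;
# stateful retrieval augments each probe with accumulated memory.

def stateless_probes(query, evidence_per_cycle):
    # Each cycle sees only the original query.
    return [query for _ in evidence_per_cycle]

def stateful_probes(query, evidence_per_cycle):
    # Each cycle's probe carries the memory accumulated so far.
    memory, probes = [], []
    for evidence in evidence_per_cycle:
        probe = query if not memory else f"{query} | known: {'; '.join(memory)}"
        probes.append(probe)
        memory.extend(evidence)
    return probes
```

With identical evidence streams, the stateless probes never change, while the stateful probes grow more specific each cycle, which is what steers later retrieval toward resolving, rather than re-surfacing, a contradiction.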

On benchmarks with 200K+-token contexts, ComoRAG consistently outperforms strong RAG baselines, with up to 11% relative gains, particularly on complex queries that require global comprehension of the narrative.
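The underlying scores are not reproduced in this note; as a reminder of what "11% relative" means, the arithmetic is (new - baseline) / baseline. The numbers below are made up purely to illustrate it.

```python
# Relative gain = (new - baseline) / baseline.
# The scores here are invented, not ComoRAG's reported numbers.
def relative_gain(baseline, new):
    return (new - baseline) / baseline

print(round(100 * relative_gain(40.0, 44.4), 1))  # 11.0
```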



stateful narrative reasoning requires iterative evidence acquisition and knowledge consolidation via a dynamic memory workspace — not stateless multi-step retrieval