Can reasoning systems maintain memory across multiple retrieval cycles?
Does integrating evidence across iterative retrieval steps—rather than treating each step independently—help systems resolve contradictions and build coherent understanding in complex narratives?
ComoRAG draws on the Prefrontal Cortex's metacognitive regulation process: reasoning is not a single retrieval action but a dynamic interplay between evidence acquisition (goal-directed memory probes) and knowledge consolidation (integrating new findings with past information). The key distinction from existing multi-step retrieval: each cycle's retrieval is informed by an evolving understanding, not executed independently.
The architecture has two components:
1. Hierarchical Knowledge Source — three layers that model text from complementary cognitive dimensions:
- Veridical layer — raw text chunks with knowledge triples for precise factual evidence (grounded recall)
- Semantic layer — GMM-clustered recursive summaries capturing thematic connections across long-range dependencies (conceptual abstraction)
- Episodic layer — sliding-window summaries capturing sequential narrative development, plot progression, and causal chains (temporal flow)
2. Metacognitive Control Loop:
- Regulatory process — reflects on current understanding state, identifies gaps, generates probing queries for new exploratory paths
- Memory workspace — integrates retrieved evidence into a global memory pool
- State evolution — the system's comprehension evolves through recognizable states (e.g., "causally incomplete" → "apparent contradiction" → "coherent context")
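The three-layer knowledge source above can be sketched as a small index structure. This is a toy illustration, not ComoRAG's implementation: a string-truncating summarizer and round-robin grouping stand in for the paper's LLM summarization and GMM clustering over embeddings, and all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class HierarchicalSource:
    """Toy three-layer index over a list of text chunks."""
    chunks: list                                   # veridical layer: raw chunks
    semantic: list = field(default_factory=list)   # cluster summaries
    episodic: list = field(default_factory=list)   # sliding-window summaries

def summarize(texts):
    # Stand-in for an LLM summarizer: concatenate and truncate.
    return " ".join(texts)[:80]

def build_source(chunks, window=3, stride=2, n_clusters=2):
    src = HierarchicalSource(chunks=chunks)
    # Episodic layer: overlapping sliding windows preserve narrative order.
    for i in range(0, max(1, len(chunks) - window + 1), stride):
        src.episodic.append(summarize(chunks[i:i + window]))
    # Semantic layer: round-robin grouping stands in for GMM clustering
    # of chunk embeddings; ComoRAG clusters and summarizes recursively.
    groups = [chunks[i::n_clusters] for i in range(n_clusters)]
    src.semantic = [summarize(g) for g in groups if g]
    return src

chunks = [f"chunk {i}" for i in range(6)]
src = build_source(chunks)
```

A retrieval cycle would then probe all three layers in parallel, so a single query can return a precise fact, a thematic summary, and a point in the plot timeline.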
The practical demonstration: for "Why did Snape kill Dumbledore?", stateless multi-step retrieval surfaces contradictory facts ("Snape protects Harry" / "Snape kills Dumbledore") but cannot integrate them. ComoRAG's memory workspace evolves through contradiction detection to coherent resolution ("an act of loyalty, not betrayal") because each retrieval cycle builds on the previous cycle's understanding.
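The control loop can be sketched as a toy Python loop over that example. This is a minimal sketch, not ComoRAG's method: a keyword-overlap retriever and a word-coverage gap check stand in for the system's LLM-driven probe generation and self-assessment, and all helper names are hypothetical.

```python
def retrieve(query, corpus, k=2):
    # Toy retriever: rank passages by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))
    return ranked[:k]

def metacognitive_loop(question, corpus, max_cycles=3):
    memory = []          # persistent memory workspace (global evidence pool)
    probes = [question]  # the initial probe is the question itself
    for _ in range(max_cycles):
        # Evidence acquisition: run every open probe against the corpus.
        for probe in probes:
            for passage in retrieve(probe, corpus):
                if passage not in memory:
                    memory.append(passage)  # consolidate into the pool
        # Regulation: find question words still unsupported by memory
        # and turn them into new probing queries for the next cycle.
        covered = set(" ".join(memory).lower().split())
        gaps = [w for w in question.lower().split() if w not in covered]
        if not gaps:     # state: "coherent context"
            break
        probes = [f"{question} {w}" for w in gaps]
    return memory

corpus = [
    "Snape protects Harry throughout the series",
    "Snape kills Dumbledore on the tower",
    "Dumbledore ordered Snape to kill him",
]
evidence = metacognitive_loop("Why did Snape kill Dumbledore", corpus)
```

Because `memory` persists across cycles, the contradictory passages end up side by side in one pool, which is what lets a downstream reasoner reconcile them instead of judging each retrieval in isolation.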
Relative to CoRAG (see "Can retrieval be scaled like reasoning at test time?"), ComoRAG adds the statefulness dimension: CoRAG interleaves retrieval with reasoning, but ComoRAG maintains a persistent memory workspace that accumulates and integrates evidence across cycles. That workspace is the key differentiator: it lets the system detect contradictions and resolve them through deeper exploration rather than treating each retrieval independently.
On benchmarks with 200K+ token contexts, ComoRAG consistently outperforms strong RAG baselines with up to 11% relative gains, particularly on complex queries requiring global comprehension.
Source: Memory
Related concepts in this collection
- Can retrieval be scaled like reasoning at test time?
  Standard RAG retrieves once, but multi-hop tasks need adaptive retrieval. Can we train models to plan retrieval chains and vary their length at test time to improve accuracy, the way test-time scaling works for reasoning?
  Relation: CoRAG interleaves retrieval with reasoning; ComoRAG adds statefulness via a memory workspace.
- When should retrieval actually help versus hurt reasoning?
  Retrieval augmentation seems universally beneficial, but does it always improve reasoning? This explores whether some reasoning steps benefit from internal knowledge alone, and when external retrieval introduces harmful noise rather than useful information.
  Relation: DeepRAG's MDP formalization is complementary; ComoRAG adds the hierarchical knowledge source.
- Can community detection enable RAG systems to answer global corpus questions?
  Standard RAG struggles with corpus-wide questions that require understanding overall themes rather than retrieving specific passages. Can graph community detection overcome this limitation at scale?
  Relation: ComoRAG's semantic layer achieves similar global comprehension via recursive clustering rather than community detection.
- Why do reasoning systems keep discovering new connections?
  Explores whether agentic graph reasoning systems maintain a special balance between semantic diversity and structural organization that enables continuous discovery of novel conceptual relationships.
  Relation: both describe iterative reasoning that self-organizes toward comprehension.
Original note title: stateful narrative reasoning requires iterative evidence acquisition and knowledge consolidation via a dynamic memory workspace — not stateless multi-step retrieval