Can query-time graph construction replace pre-built knowledge graphs?
Does building dependency graphs from individual queries at inference time offer a more flexible and cost-effective alternative to constructing knowledge graphs over entire document collections upfront?
GraphRAG builds a knowledge graph over the entire corpus before any query is served. The graph captures relationships between entities in documents, enabling multi-hop reasoning by traversal, and performance on complex tasks is strong. But pre-building is costly: graph construction incurs heavy token overhead, the graph must be updated with latency as the corpus evolves, and because it is LLM-generated it may encode irrelevant or redundant relationships.
Worse: the pre-built graph is static. Real-world queries vary in type and complexity, requiring different logic structures for accurate reasoning. A graph built for financial reporting queries may not support the traversal patterns needed for medical diagnosis queries on the same corpus.
LogicRAG inverts this: instead of building a graph over the corpus, it builds a graph over the query at inference time. It decomposes the query into subproblems, constructs a directed acyclic graph (DAG) encoding the logical dependencies between them, topologically sorts the DAG to get an execution order, and resolves each subproblem via retrieval conditioned on the previously resolved subproblems.
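The decompose → DAG → topological sort → conditioned-resolution loop can be sketched with Python's standard-library `graphlib`. The subproblems and the `resolve` placeholder here are hypothetical; a real system would call a retriever and an LLM at that point, and the decomposition itself would be produced by an LLM rather than hand-written:

```python
from graphlib import TopologicalSorter

# Hypothetical decomposition of a two-hop query:
# "Which policies introduced by this figure were later reversed?"
subproblems = {
    "q1": "Which policies did this figure introduce?",
    "q2": "Which of those policies were later reversed?",
}
# DAG as node -> set of predecessors: q2 depends on q1's answer.
deps = {"q1": set(), "q2": {"q1"}}

def resolve(sub_id, context):
    # Placeholder: a real system retrieves documents conditioned on the
    # answers in `context` and asks an LLM to answer the subproblem.
    return f"answer[{sub_id} | given {sorted(context)}]"

resolved = {}
# static_order() yields each subproblem only after all its dependencies.
for sub_id in TopologicalSorter(deps).static_order():
    resolved[sub_id] = resolve(sub_id, {d: resolved[d] for d in deps[sub_id]})
```

The key property is the conditioning: `q2` is only resolved once `q1`'s answer is available to shape its retrieval, which is exactly what a corpus-level graph cannot guarantee for an unseen query shape.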
The result: retrieval plans that match the logical structure of the specific query. A multi-hop question about "which policies introduced by this figure were later reversed?" generates a DAG with specific dependency edges that no pre-built graph would have pre-encoded. Context pruning (LLM-based summarization of retrieved content) and graph pruning (merging semantically similar subproblems) reduce token overhead.
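The graph-pruning step can be illustrated with a minimal sketch. This is not LogicRAG's actual algorithm: `SequenceMatcher` stands in for the embedding-based semantic similarity a real system would use, the `prune` helper and its 0.8 threshold are invented for illustration, and a full implementation would also redirect the DAG's dependency edges from merged nodes to their representatives:

```python
from difflib import SequenceMatcher

# Sketch of graph pruning: collapse near-duplicate subproblems onto a single
# representative so each is retrieved and resolved only once.
def prune(subproblems, threshold=0.8):
    merged = {}  # representative id -> question text
    alias = {}   # original id -> representative id
    for sid, text in subproblems.items():
        for rep_id, rep_text in merged.items():
            if SequenceMatcher(None, text, rep_text).ratio() >= threshold:
                alias[sid] = rep_id  # fold sid into the existing subproblem
                break
        else:
            merged[sid] = text
            alias[sid] = sid
    return merged, alias

subs = {
    "q1": "Which policies did the figure introduce?",
    "q2": "Which policies did this figure introduce?",  # near-duplicate of q1
    "q3": "Which of those policies were later reversed?",
}
merged, alias = prune(subs)  # q2 folds into q1; q3 stays separate
```

Context pruning is the complementary move on the retrieval side: summarizing what each resolved subproblem retrieved before passing it downstream, so the token budget scales with the DAG rather than with raw document length.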
This generalizes the question "Do hierarchical retrieval architectures outperform flat ones on complex queries?" — the same separation principle, implemented dynamically at the query level rather than architecturally.
Source: RAG
Related concepts in this collection
- Do hierarchical retrieval architectures outperform flat ones on complex queries?
  Explores whether separating query planning from answer synthesis into distinct architectural components improves performance on multi-hop retrieval tasks compared to unified single-pass approaches. LogicRAG is a concrete mechanism for the "query planning" step; DAG construction makes the planning structure explicit and executable.
- When do graph databases outperform vector embeddings for retrieval?
  Vector similarity struggles with aggregate and relational queries that require traversing multiple entity connections. Can graph-oriented databases with deterministic queries solve this failure mode in enterprise domain applications? This is the pre-built graph approach; LogicRAG offers the relational reasoning benefit without the build cost by constructing query-specific logic at inference time.
- Can externalizing reasoning into knowledge graphs help smaller models compete?
  Can structuring LLM reasoning as explicit knowledge graph triples enable smaller, cheaper models to solve complex tasks more effectively? This matters because it could make advanced reasoning accessible without scaling model size. KGoT and LogicRAG are complementary inference-time graph construction approaches for different purposes: LogicRAG builds query-dependency DAGs for structured retrieval planning, KGoT builds reasoning-trace KGs for externalized computation; both demonstrate that graph structure adds value over flat attention-based reasoning at different points in the pipeline.
- Can routing queries to task-matched structures improve RAG reasoning?
  Does matching retrieval structure type to task demands—tables for analysis, graphs for inference, algorithms for planning—improve reasoning accuracy over uniform chunk retrieval? This explores whether cognitive fit principles from human learning transfer to AI systems. LogicRAG's query DAG is the "graph" option in StructRAG's five-structure routing space; cognitive fit theory provides the theoretical grounding for when DAG-structured retrieval outperforms chunk-based retrieval.
Original note title
inference-time query logic graphs avoid the cost and inflexibility of pre-built knowledge graph rag