Can a coordination layer turn LLM patterns into genuine reasoning?
LLMs excel at pattern retrieval but lack external constraint binding. Can a System 2 coordination layer—anchoring outputs to goals and evidence—transform statistical associations into goal-directed reasoning?
The AI community's debate between "scale LLMs to AGI" and "LLMs are a dead end" relies on a false dichotomy. MACI proposes a third position: LLMs are the necessary System 1 substrate (the pattern repository), but the bottleneck is a missing System 2 coordination layer that binds patterns to external constraints, verifies outputs, and maintains state over time.
A fishing metaphor clarifies: the ocean is the model's vast pattern repository. Casting without bait catches the maximum likelihood prior — common fish (generic outputs). Intelligent behavior requires baiting (conveying intent) and filtering (discarding bad catches). If bait density is too sparse, the prior dominates. If sufficient, it shifts the posterior toward the target. But bait is not free — excessive context is inefficient. The missing layer optimizes this tradeoff.
UCCT (Unified Coordinate of Cognitive Transition) formalizes this as a phase transition governed by three variables:
- Effective support (ρ_d): density of anchoring evidence
- Representational mismatch (d_r): gap between retrieval and target semantics
- Adaptive anchoring budget (γ log k): penalizes unbounded context to prevent signal dilution
Ungrounded generation = unbaited cast = maximum likelihood prior. "Reasoning" emerges when sufficient anchors shift the posterior past a threshold — a phase transition, not a gradual improvement.
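Making the threshold explicit (a minimal formalization layered on the three variables above; the functional form and the threshold θ are illustrative assumptions, not a quoted UCCT equation):

```latex
% Net anchoring signal: support minus mismatch minus context budget.
\Delta(\rho_d, d_r, k) = \rho_d - d_r - \gamma \log k

% Assumed phase boundary at threshold \theta:
\text{output} \approx
\begin{cases}
  \text{target-bound reasoning}, & \Delta > \theta \\
  \text{maximum likelihood prior}, & \Delta \le \theta
\end{cases}
```

Under this assumed form, ρ_d grows with every anchor while the budget term grows only logarithmically in k, so crossing θ is abrupt, consistent with the snap-not-drift picture.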
Three coordination mechanisms operationalize this in the MACI stack (a minimal code sketch follows the list):
- Baiting (behavior-modulated debate): Agents' stance strength adapts to evidence — not fixed advocacy but dynamic explore-vs-consolidate
- Filtering (Socratic judging via CRIT): A judge evaluates arguments on clarity, consistency, evidential grounding, and falsifiability — independent of stance. Low-scoring arguments are rejected or returned with targeted Socratic queries
- Persistence (transactional memory): State is maintained across debate rounds, so accepted arguments persist in shared state
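A minimal Python sketch of how filtering and persistence compose in one debate round (behavior modulation of stance strength is omitted). Everything here is illustrative: `Argument`, `crit_score`, the 0.6 gate, and the evidence-count proxy for grounding are stand-ins, not MACI's actual API; in MACI the judge is an LLM applying the Socratic CRIT rubric.

```python
from dataclasses import dataclass

# Toy sketch of one MACI-style debate round. All names, scores, and
# thresholds here are illustrative assumptions, not the published MACI API.

@dataclass
class Argument:
    agent: str
    claim: str
    evidence: list

def crit_score(arg: Argument) -> dict:
    # Stand-in for the Socratic CRIT judge: scores clarity, consistency,
    # evidential grounding, and falsifiability independent of stance.
    # Grounding uses a crude evidence-count proxy; in MACI an LLM judges this.
    grounding = min(1.0, len(arg.evidence) / 3)
    return {"clarity": 0.8, "consistency": 0.8,
            "grounding": grounding, "falsifiability": 0.7}

def passes_gate(scores: dict, gate: float = 0.6) -> bool:
    # Filtering: every dimension must clear the bar, or the argument is
    # returned to its author with a targeted Socratic query instead.
    return min(scores.values()) >= gate

def debate_round(proposals: list, shared_state: list) -> list:
    staged = []  # Persistence: stage first, commit atomically at round end
    for arg in proposals:
        if passes_gate(crit_score(arg)):
            staged.append(arg)
        else:
            print(f"Socratic query to {arg.agent}: "
                  f"what evidence supports {arg.claim!r}?")
    shared_state.extend(staged)  # transactional commit of this round
    return shared_state

state = debate_round(
    [Argument("pro", "anchors shift the posterior", ["cite A", "cite B", "cite C"]),
     Argument("con", "fluent but unsupported claim", [])],  # gets filtered
    shared_state=[],
)
print([a.claim for a in state])  # only the grounded argument was committed
```

The design point is the staging list: arguments commit to shared state only at round end and only if they clear every CRIT dimension, which is what keeps rhetorically fluent but unsupported claims out of the shared record.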
The CRIT judge addresses a specific failure identified in "When does debate actually improve reasoning accuracy?": debate alone is insufficient if agents generate vague, inconsistent, or rhetorically fluent but unsupported claims. CRIT gates communication, so only well-formed arguments enter shared state.
The deeper claim: a few well-chosen examples can rebind an entire model, making in-context learning (ICL) a phase transition rather than gradual learning. This makes the large pattern repository a feature, not a bug: it is what makes threshold-driven reconfiguration possible.
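A toy run of the threshold dynamics (same assumed functional form as the sketch above) shows why a handful of in-context examples can flip behavior rather than nudge it:

```python
import math

# Toy phase-transition curve for in-context learning. The functional form
# (per-example support, sharpness beta) is an illustrative assumption;
# only the qualitative shape matters.
def p_target(k: int, support_per_example: float = 1.0, d_r: float = 2.0,
             gamma: float = 0.5, beta: float = 6.0) -> float:
    rho_d = support_per_example * k                # anchoring evidence from k examples
    budget = gamma * math.log(k) if k > 0 else 0.0
    delta = rho_d - d_r - budget                   # net anchoring signal
    return 1 / (1 + math.exp(-beta * delta))       # sharp sigmoid: a transition, not a drift

for k in range(7):
    print(k, round(p_target(k), 4))
# p(target) jumps from ~0.11 at k=2 to ~0.94 at k=3: a few anchors
# rebind the posterior rather than nudging it.
```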
Source: Novel Architectures
Related concepts in this collection
- "When does debate actually improve reasoning accuracy?": Multi-agent debate shows promise for reasoning tasks, but under what conditions does it help versus hurt? The research explores whether debate amplifies errors when evidence verification is missing. MACI's CRIT judge directly addresses the amplification problem.
- "Why do AI systems agree when they should disagree?": When multi-agent AI systems are designed to improve through disagreement, why do they converge on consensus instead? What breaks the deliberation process? MACI's behavior modulation prevents premature convergence.
- "Why do multi-agent LLM systems converge without real debate?": When multiple AI agents reason together, do they genuinely deliberate or just accommodate each other's views? Research into clinical reasoning systems reveals how often agents reach agreement without substantive disagreement. CRIT forces genuine deliberation by filtering ill-posed arguments.
- "Why do people trust AI outputs they shouldn't?": When do human cognitive shortcuts fail in AI interaction? Three compounding traps (treating statistical patterns as facts, mistaking fluency for understanding, and avoiding disagreement) may explain systematic overreliance across languages and contexts. MACI's System 1/System 2 framing is architecturally operationalized here.
Original note title
LLMs are system 1 substrate — AGI requires a system 2 coordination layer that binds patterns to external constraints