LLM Reasoning and Architecture · Agentic and Multi-Agent Systems · Language Understanding and Pragmatics

Can a coordination layer turn LLM patterns into genuine reasoning?

LLMs excel at pattern retrieval but lack external constraint binding. Can a System 2 coordination layer—anchoring outputs to goals and evidence—transform statistical associations into goal-directed reasoning?

Note · 2026-02-23 · sourced from Novel Architectures

The AI community's debate between "scale LLMs to AGI" and "LLMs are a dead end" relies on a false dichotomy. MACI proposes a third position: LLMs are the necessary System 1 substrate (the pattern repository), but the bottleneck is a missing System 2 coordination layer that binds patterns to external constraints, verifies outputs, and maintains state over time.

A fishing metaphor clarifies: the ocean is the model's vast pattern repository. Casting without bait catches the maximum likelihood prior — common fish (generic outputs). Intelligent behavior requires baiting (conveying intent) and filtering (discarding bad catches). If bait density is too sparse, the prior dominates. If sufficient, it shifts the posterior toward the target. But bait is not free — excessive context is inefficient. The missing layer optimizes this tradeoff.
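Under a simple Bayesian reading of the metaphor (an assumption for illustration, not the source's formalism), each piece of bait contributes log-likelihood evidence added to the prior's log-odds; too little evidence and the prior keeps dominating:

```python
def posterior_log_odds(prior_log_odds: float, anchor_strengths: list[float]) -> float:
    """Toy Bayesian reading: each anchor ('bait') adds log-likelihood
    evidence that shifts the posterior away from the prior."""
    return prior_log_odds + sum(anchor_strengths)

# The unbaited cast strongly favors the common fish (generic output).
prior = -4.0  # illustrative log-odds of the target behavior with no anchors

sparse = posterior_log_odds(prior, [0.5, 0.5])       # bait too sparse: prior dominates
dense = posterior_log_odds(prior, [1.5, 1.5, 1.5])   # enough bait: posterior flips

print(sparse > 0, dense > 0)  # False True
```

The tradeoff the missing layer optimizes is visible here: each extra anchor costs context, so the goal is crossing the flip point with as few anchors as possible.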

UCCT (Unified Coordinate of Cognitive Transition) formalizes this as a phase transition governed by three variables implicit in the metaphor: the strength of the model's prior (the pattern repository), the density of anchors supplied in context, and a task-dependent threshold.

Ungrounded generation = unbaited cast = maximum likelihood prior. "Reasoning" emerges when sufficient anchors shift the posterior past the threshold — a phase transition, not a gradual improvement.
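The source does not give a functional form for the transition; a minimal numeric sketch, assuming a logistic curve (an illustrative choice), shows why a sharp threshold reads as a phase transition rather than gradual improvement:

```python
import math

def p_target(anchor_density: float, threshold: float = 5.0, sharpness: float = 8.0) -> float:
    """Probability the posterior lands on the target as anchor density grows.
    Large `sharpness` makes the curve step-like around the threshold; both
    parameters are illustrative assumptions, not values from the source."""
    return 1.0 / (1.0 + math.exp(-sharpness * (anchor_density - threshold)))

for d in [3.0, 4.5, 5.0, 5.5, 7.0]:
    print(f"density={d}: p={p_target(d):.3f}")
```

Below the threshold the probability is near zero however much you add; just past it, performance saturates — the same qualitative picture as the baited-cast flip.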

Three coordination mechanisms operationalize this in the MACI stack:

  1. Baiting (behavior-modulated debate): agents' stance strength adapts to the evidence — not fixed advocacy but a dynamic explore-vs-consolidate policy.
  2. Filtering (Socratic judging via CRIT): a judge evaluates arguments on clarity, consistency, evidential grounding, and falsifiability, independent of stance. Low-scoring arguments are rejected or returned with targeted Socratic queries.
  3. Persistence (transactional memory): state is maintained across debate rounds so conclusions accumulate rather than reset.
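A minimal sketch of the filtering step. The four criteria come from the note; the `Argument` type, the scores, the gate threshold, and the query wording are hypothetical:

```python
from dataclasses import dataclass

CRITERIA = ("clarity", "consistency", "grounding", "falsifiability")
GATE = 0.6  # illustrative minimum score per criterion to enter shared state

@dataclass
class Argument:
    claim: str
    scores: dict[str, float]  # criterion -> score in [0, 1], judged independent of stance

def gate(arg: Argument) -> tuple[bool, list[str]]:
    """Admit only well-formed arguments; otherwise return targeted Socratic queries."""
    queries = [
        f"What would strengthen the {c} of: '{arg.claim}'?"
        for c in CRITERIA
        if arg.scores.get(c, 0.0) < GATE
    ]
    return (not queries, queries)

ok, qs = gate(Argument("X causes Y", {"clarity": 0.9, "consistency": 0.8,
                                      "grounding": 0.3, "falsifiability": 0.7}))
print(ok, qs)  # False, plus one query targeting grounding
```

The design point is that the gate sits on the communication channel, not on the agents: a rhetorically fluent but ungrounded claim never reaches shared state.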

The CRIT judge addresses a specific failure identified in "When does debate actually improve reasoning accuracy?": debate alone is insufficient if agents generate vague, inconsistent, or rhetorically fluent but unsupported claims. CRIT gates communication — only well-formed arguments enter shared state.

The deeper claim: a few examples can rebind an entire model — ICL as phase transition rather than gradual learning. This makes the large pattern repository a feature, not a bug: it's what makes threshold-driven reconfiguration possible.



Original note title: LLMs are system 1 substrate — AGI requires a system 2 coordination layer that binds patterns to external constraints