Psychology and Social Cognition · Agentic and Multi-Agent Systems · LLM Reasoning and Architecture

Can branching prompts replicate what multi-agent systems do?

Explores whether non-linear prompting structures (tree-of-thought, debate prompting) can functionally replace multi-agent architectures, and whether a single LLM simulating multiple personas achieves the same cognitive benefits as multiple models collaborating.

Note · 2026-02-23 · sourced from Agents Multi
Why can't AI models lead conversations on their own? What makes multi-agent teams actually perform better?

The Agent-Centric Projection paper (2025) introduces a distinction between linear contexts (single continuous interaction sequence) and non-linear contexts (branching or multi-path) in LLM systems, then proposes three conjectures based on this framework:

  1. Results from non-linear prompting techniques can predict outcomes in equivalent multi-agent systems
  2. Multi-agent system architectures can be replicated through single-LLM prompting techniques that simulate equivalent interaction patterns
  3. These equivalences suggest novel approaches for generating synthetic training data

If conjecture 2 holds, the entire multi-agent literature becomes a source of prompting strategies — and the prompting literature becomes a source of multi-agent architectures. The mapping is structural: any non-linear prompt structure (tree-of-thought, graph-of-thought, debate-structured prompting) has a multi-agent analog, and vice versa.
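The structural mapping can be made concrete. The sketch below (my own illustration, not from the paper) represents a non-linear prompt context as a tree and projects it onto agents: each root-to-leaf path becomes one agent's linear context, which is the sense in which any tree-of-thought structure has a multi-agent analog.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One node in a non-linear (branching) prompt context."""
    prompt: str
    children: list["Branch"] = field(default_factory=list)

def project_to_agents(root: Branch, prefix: tuple = ()) -> list[tuple[str, ...]]:
    """Project a prompt tree onto agents: each root-to-leaf path
    becomes one agent's linear context."""
    path = prefix + (root.prompt,)
    if not root.children:
        return [path]
    contexts = []
    for child in root.children:
        contexts.extend(project_to_agents(child, path))
    return contexts

# A tree-of-thought structure with two lines of exploration...
tree = Branch("State the problem", [
    Branch("Explore approach A"),
    Branch("Explore approach B", [Branch("Refine B")]),
])

# ...projects onto two agents, each seeing only its own linear history.
agent_contexts = project_to_agents(tree)
```

The inverse direction works the same way: merging agent contexts that share a prefix reconstructs the tree, which is why the correspondence is bidirectional.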

Solo Performance Prompting (SPP) provides empirical support. A single LLM dynamically identifies and simulates multiple personas to achieve "cognitive synergy" — collaborating with itself in multiple roles without requiring multiple model instances. Fine-grained personas (dynamically identified per task) outperform fixed or single personas. This is conjecture 2 in practice: a single LLM replicating a multi-agent debate architecture through structured prompting.
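A minimal sketch of the SPP loop, assuming a generic chat-completion call (the `llm` function is a stub, and the persona-identification logic is an illustrative placeholder for what SPP does dynamically):

```python
def llm(prompt: str) -> str:
    # Stub standing in for any chat-completion call.
    return f"[model output for {len(prompt)}-char prompt]"

def identify_personas(task: str) -> list[str]:
    # Placeholder: SPP has the model itself identify fine-grained
    # personas per task; this fixed rule only illustrates the shape.
    if "proof" in task.lower():
        return ["Mathematician", "Skeptical Reviewer"]
    return ["Domain Expert", "Critic", "Synthesizer"]

def solo_performance_prompt(task: str, rounds: int = 2) -> str:
    """One model plays every persona in turn inside a single context,
    then consolidates -- a single-LLM analog of multi-agent debate."""
    personas = identify_personas(task)
    context = f"Task: {task}\nParticipants: {', '.join(personas)}\n"
    for _ in range(rounds):
        for persona in personas:
            reply = llm(context + f"\n{persona}:")
            context += f"\n{persona}: {reply}"
    return llm(context + "\nFinal consolidated answer:")
```

The key point is that all "participants" live in one growing context window: the debate is simulated, not distributed.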

The synthetic data implication (conjecture 3) is practical: if prompting techniques and multi-agent interactions produce equivalent dynamics, then multi-agent interaction transcripts become training data for single-model non-linear reasoning, and vice versa. As the related note "Does training on messy search processes improve reasoning?" suggests, the messy interaction transcripts from multi-agent debate may be more valuable training data than clean single-agent outputs.
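One way the conversion could look (a sketch of my own, with a made-up transcript and role-marker format): a multi-agent debate transcript is linearized into a single supervised example, with roles kept as inline markers so one model can learn the full non-linear trace.

```python
# Hypothetical multi-agent debate transcript: (role, message) pairs.
transcript = [
    ("Proposer", "Try induction on n."),
    ("Critic", "The base case n=0 is missing."),
    ("Proposer", "Add base case: holds trivially for n=0."),
]

def to_training_example(task: str, transcript: list, answer: str) -> dict:
    """Linearize a multi-agent transcript into a single-model
    (prompt, target) pair; roles become inline markers."""
    reasoning = "\n".join(f"<{role}> {msg}" for role, msg in transcript)
    return {"prompt": task, "target": reasoning + f"\n<Answer> {answer}"}

ex = to_training_example("Prove P(n) for all n.", transcript, "Proof by induction.")
```

The reverse direction, splitting a single model's persona-tagged trace back into per-agent messages, would generate multi-agent training episodes from solo reasoning.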

The open question: does the equivalence hold at scale? Multi-agent systems with truly different base models introduce a diversity that single-LLM persona simulation cannot match, because all simulated personas share the same weights and therefore the same biases.



