Agentic and Multi-Agent Systems · LLM Reasoning and Architecture · Language Understanding and Pragmatics

Can agents share thoughts directly without using language?

Explores whether multi-agent systems can communicate by exchanging latent thoughts extracted from hidden states, bypassing the ambiguity and misalignment problems inherent in natural language.

Note · 2026-02-23 · sourced from Cognitive Models Latent

Natural language is inherently sequential, ambiguous, and imprecise — an indirect reflection of thought. Existing multi-agent LLM systems communicate via tokens or embeddings, inheriting all of language's limitations. Empirical analyses confirm that many inter-agent collaboration failures stem from vague message specification and inter-agent misalignment, both caused by the indirect nature of language-based communication.

Thought Communication proposes a fundamentally different paradigm: agents share latent thoughts directly, extracted from their hidden states. The formalization treats agent states as generated from latent thoughts through an unknown function, then proves that both shared and private latent thoughts between any agent pair can be identified from observations alone.

Theoretical foundation: In a nonparametric setting without auxiliary information, the framework guarantees recovery of (1) individual latent thoughts, (2) the distinction between shared and private thoughts, and (3) the global structure of thought sharing — which agents share which thoughts and how. This identifiability result ensures recovered representations reflect genuine internal reasoning structure.
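The generative setup can be sketched as follows (notation mine, not necessarily the paper's): each agent's hidden state is produced from latent thoughts by an unknown function, and the latent vector splits into components shared with other agents and components private to the agent.

```latex
% Illustrative notation (an assumption, not copied from the paper):
% x^{(i)} is agent i's observed hidden state, f_i an unknown generator,
% and z^{(i)} decomposes into shared and private latent thoughts.
x^{(i)} = f_i\big(z^{(i)}\big), \qquad
z^{(i)} = \big(\, z_{\mathrm{sh}(i)},\; z_{\mathrm{pr}(i)} \,\big)
```

The identifiability claim is that, from the observed states $\{x^{(i)}\}$ alone, both the latent thoughts and the shared/private partition are recoverable, without supervision or auxiliary variables.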

Practical implementation: A sparsity-regularized autoencoder extracts latent thoughts from agent hidden states. Each agent receives inferred thoughts plus the structure of how each thought is shared across agents. Agents can reason not just about what others think but about which thoughts are mutually held versus privately maintained.
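To make the extraction step concrete, here is a minimal NumPy toy sketch of a sparsity-regularized autoencoder: a linear encoder/decoder with an L1 penalty on the latent code, trained by plain gradient descent on synthetic stand-ins for agent hidden states. This is my illustrative stand-in, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, d_hidden, d_latent = 256, 32, 8

# Synthetic stand-in for agent hidden states (real use: LLM hidden states).
H = rng.normal(size=(n_samples, d_hidden))

W_enc = rng.normal(scale=0.1, size=(d_hidden, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, d_hidden))
lam, lr = 1e-3, 0.1   # L1 weight and step size (illustrative values)

def loss_and_grads(H, W_enc, W_dec):
    Z = H @ W_enc                       # latent "thoughts"
    R = Z @ W_dec                       # reconstruction of hidden states
    err = R - H
    loss = (err ** 2).mean() + lam * np.abs(Z).mean()
    # Manual gradients (subgradient for the L1 term).
    dR = 2.0 * err / err.size
    g_dec = Z.T @ dR
    dZ = dR @ W_dec.T + lam * np.sign(Z) / Z.size
    g_enc = H.T @ dZ
    return loss, g_enc, g_dec

losses = []
for _ in range(300):
    loss, g_enc, g_dec = loss_and_grads(H, W_enc, W_dec)
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec
    losses.append(loss)
```

The L1 term pushes latent coordinates toward zero, so each extracted "thought" tends to load on only a few dimensions, which is what makes comparing thoughts across agents tractable.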

Why this matters beyond efficiency: The paradigm does not just speed up communication; it changes what can be communicated. As Why do speakers deliberately use ambiguous language? argues, natural language preserves useful ambiguity. But in multi-agent reasoning, where Why do multi-agent LLM systems converge without real debate? shows agents agreeing too easily, that same ambiguity enables premature convergence. Direct thought sharing could allow agents to detect alignment or conflict at the representational level before it manifests in language, potentially addressing the silent agreement problem at its root.

The connection to Can multiple LLMs coordinate without explicit collaboration rules? is structural: Hogwild! Inference shows emergent coordination through shared computational context; Thought Communication formalizes what is being shared and provides theoretical guarantees for the extraction. The two approaches are complementary — shared KV cache for implicit coordination, thought extraction for explicit coordination.

LatentMAS: training-free alternative via KV-cache working memory (from Arxiv/Agents Multi Architecture): LatentMAS relies on a critically different mechanism from Thought Communication. Rather than using a trained sparse autoencoder to extract shared and private latent thoughts with identifiability guarantees, LatentMAS is entirely training-free: agents generate thoughts as auto-regressive last-layer hidden embeddings and exchange information via shared layer-wise KV caches. The results are striking: up to 14.6% accuracy improvement, 70-84% token reduction, and 4-4.3x faster inference across 9 benchmarks, all without any training. The approaches are complementary: Thought Communication for explicit, controlled sharing with theoretical guarantees; LatentMAS for efficient, training-free implicit sharing with practical performance gains. See Can agents share thoughts without converting them to text?.
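The KV-cache exchange can be illustrated with a toy sketch (names and projections are hypothetical placeholders, not LatentMAS code): each agent appends its last-layer latent to a shared per-layer cache, and downstream agents read those latents directly, with no text decoding in between.

```python
from collections import defaultdict
import numpy as np

# Shared working memory: layer index -> list of (agent_id, key, value).
shared_kv = defaultdict(list)

def agent_step(agent_id, hidden, n_layers=2):
    """Append this agent's latent 'thought' to the shared cache.

    The key/value projections here are trivial stand-ins; a real system
    would use the model's own attention projections.
    """
    for layer in range(n_layers):
        k = hidden * 0.5          # placeholder key projection
        v = hidden                # placeholder value projection
        shared_kv[layer].append((agent_id, k, v))
    return hidden

rng = np.random.default_rng(1)
for agent_id in range(3):
    agent_step(agent_id, rng.normal(size=4))

# A downstream agent can now attend over all cached latents, token-free.
print(len(shared_kv[0]))  # → 3
```

The design point this illustrates: coordination state lives in the cache itself, so no training signal is needed, which is exactly where LatentMAS and the trained-extractor approach of Thought Communication diverge.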


Original note title: thought communication enables multi-agent collaboration through direct latent thought sharing that bypasses language bottlenecks