LLM Reasoning and Architecture · Language Understanding and Pragmatics · Agentic and Multi-Agent Systems

Can communication pressure drive agents to learn shared abstractions?

Under what conditions do AI agents develop compact, efficient shared languages? This note explores whether cooperative task pressure, rather than explicit optimization, naturally drives abstraction formation, mirroring human collaborative communication.

Note · 2026-02-23 · sourced from Cognitive Models Latent

Cognitive science has demonstrated that humans engaged in collaborative task-oriented communication tend toward higher levels of abstraction over time, enabling shorter and more information-efficient utterances. ACE (Abstractions for Communicating Efficiently) replicates this phenomenon computationally and identifies the mechanism: the need to communicate about a shared task creates natural pressure that drives abstraction formation.

The method combines three components:

  1. Library learning (symbolic) — proposing candidate abstractions from patterns observed in communication
  2. Neural communication — generating and interpreting utterances using learned abstractions
  3. Bandit algorithms — controlling the exploration-exploitation trade-off when introducing new abstractions into the shared language
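A minimal sketch of how these pieces might interact. This is an illustration under stated assumptions, not ACE's implementation: frequent-subsequence mining stands in for symbolic library learning, a UCB bandit stands in for ACE's bandit component, and all names (`propose_abstractions`, `UCBAbstractionBandit`) are hypothetical.

```python
import math
from collections import Counter

def propose_abstractions(programs, min_count=2, max_len=3):
    """Library learning (symbolic), crudely approximated: candidate
    abstractions are primitive subsequences that recur across programs."""
    counts = Counter()
    for prog in programs:
        for n in range(2, max_len + 1):
            for i in range(len(prog) - n + 1):
                counts[tuple(prog[i:i + n])] += 1
    return [sub for sub, c in counts.items() if c >= min_count]

class UCBAbstractionBandit:
    """Bandit over candidate abstractions: balances introducing new
    abstractions (exploration) against reusing established ones
    (exploitation) via an upper-confidence-bound score."""
    def __init__(self, arms, c=1.0):
        self.arms = list(arms)
        self.c = c
        self.n = {a: 0 for a in self.arms}      # times each arm was used
        self.total = {a: 0.0 for a in self.arms}  # cumulative reward
        self.t = 0

    def select(self):
        self.t += 1
        # Never-used abstractions get priority (pure exploration).
        for a in self.arms:
            if self.n[a] == 0:
                return a
        return max(self.arms, key=lambda a: self.total[a] / self.n[a]
                   + self.c * math.sqrt(math.log(self.t) / self.n[a]))

    def update(self, arm, reward):
        # Reward could be 1.0 if the listener decoded the utterance
        # correctly, 0.0 otherwise (an assumption, not ACE's objective).
        self.n[arm] += 1
        self.total[arm] += reward

# Toy communication history: two programs over shared primitives.
programs = [["move", "turn", "move", "turn"], ["move", "turn", "grab"]]
candidates = propose_abstractions(programs)   # [("move", "turn")]
bandit = UCBAbstractionBandit(candidates)
arm = bandit.select()
bandit.update(arm, reward=1.0)
```

The neural component (generating and interpreting utterances) is omitted here; in this sketch its role is reduced to the reward signal fed to `update`.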

The result: agents develop compact collaborative languages with shorter programs. This compactness is a consequence of pressures that naturally arise from communicating about a shared task, not of explicit optimization for brevity.
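The "shorter programs" claim can be illustrated with a toy rewriting step: once an abstraction enters the shared library, programs that use it compress. A minimal sketch, where the `compress` helper and the `step` abstraction are hypothetical examples, not taken from ACE:

```python
def compress(program, library):
    """Greedily rewrite a program using named abstractions.
    `library` maps an abstraction name to the primitive sequence
    it abbreviates."""
    out, i = [], 0
    while i < len(program):
        for name, body in library.items():
            if program[i:i + len(body)] == list(body):
                out.append(name)   # replace the matched span by its name
                i += len(body)
                break
        else:
            out.append(program[i])  # no abstraction applies; keep primitive
            i += 1
    return out

library = {"step": ("move", "turn")}
prog = ["move", "turn", "move", "turn", "grab"]
print(compress(prog, library))  # ['step', 'step', 'grab']: 5 tokens -> 3
```

The compression is a side effect of reuse: nothing in `compress` optimizes for brevity directly, mirroring the note's point that shorter programs fall out of shared abstractions rather than an explicit length penalty.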

This connects to two existing findings about communication in AI systems. "Why don't conversational AI systems mirror their users' word choices?" documents that current AI systems fail to adapt their vocabulary to conversational partners. ACE shows that under the right training regime (cooperative tasks with repeated interaction), agents CAN develop shared vocabulary: the capability exists but requires the right environmental pressure.

"Can we teach LLMs to form linguistic conventions in context?" addresses convention formation from the training side. ACE provides the theoretical framework: abstraction learning is shaped by communication pressure, and the balance between introducing new abstractions (exploration) and using established ones (exploitation) is a core design parameter.

The cognitive science framing is important: Ho et al. (2019) identified "the need to communicate and coordinate with others" as an outstanding open problem for understanding abstraction learning. ACE demonstrates that cooperative communication IS a sufficient pressure for driving abstraction: agents don't need explicit instruction to abstract; they need a reason to communicate efficiently.



communication pressure drives agents to develop compact shared abstractions — efficiency and informativeness are co-optimized through neurosymbolic library learning