Can communication pressure drive agents to learn shared abstractions?
Under what conditions do AI agents develop compact, efficient shared languages? This note explores whether cooperative task pressure, rather than explicit optimization, naturally drives abstraction formation, mirroring human collaborative communication.
Cognitive science has demonstrated that humans engaged in collaborative task-oriented communication tend toward higher levels of abstraction over time, enabling shorter and more information-efficient utterances. ACE (Abstractions for Communicating Efficiently) replicates this phenomenon computationally and identifies the mechanism: the need to communicate about a shared task creates natural pressure that drives abstraction formation.
The method combines three components:
- Library learning (symbolic) — proposing candidate abstractions from patterns observed in communication
- Neural communication — generating and interpreting utterances using learned abstractions
- Bandit algorithms — controlling the exploration-exploitation trade-off when introducing new abstractions into the shared language
The result: agents develop compact collaborative languages with shorter programs. This compactness is a consequence of pressures that arise naturally through communication about a shared task, not of explicit optimization for brevity.
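To make the first component concrete, here is a minimal, hypothetical sketch of library-learning-style abstraction proposal: recurring patterns in task-directed utterances become named abstractions that shorten later messages. The function names, the n-gram heuristic, and the toy utterances are illustrative assumptions; ACE itself operates over program fragments with neural generation and interpretation, not raw token n-grams.

```python
from collections import Counter

def propose_abstractions(utterances, min_count=2, max_len=4):
    """Propose candidate abstractions: token n-grams that recur across
    utterances. A toy stand-in for symbolic library learning."""
    counts = Counter()
    for tokens in utterances:
        for n in range(2, max_len + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return [ngram for ngram, c in counts.items() if c >= min_count]

def compress(tokens, abstraction, name):
    """Rewrite a token sequence, replacing the abstraction with its name."""
    out, i, n = [], 0, len(abstraction)
    while i < len(tokens):
        if tuple(tokens[i:i + n]) == abstraction:
            out.append(name)
            i += n
        else:
            out.append(tokens[i])
            i += 1
    return out

# Repeated structure in task-directed utterances becomes a single symbol,
# so later messages get shorter without any explicit brevity objective.
utterances = [["move", "left", "grab", "red"], ["move", "left", "grab", "blue"]]
best = max(propose_abstractions(utterances), key=len)  # ('move', 'left', 'grab')
print(compress(utterances[0], best, "FETCH"))          # ['FETCH', 'red']
```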
This connects to two existing findings about communication in AI systems. "Why don't conversational AI systems mirror their users' word choices?" documents that current AI systems fail to adapt their vocabulary to conversational partners. ACE shows that under the right training regime (cooperative tasks with repeated interaction), agents can in fact develop shared vocabulary: the capability exists but requires the right environmental pressure.
"Can we teach LLMs to form linguistic conventions in context?" addresses convention formation from the training side. ACE provides the theoretical framework: abstraction learning is shaped by communication pressure, and the balance between introducing new abstractions (exploration) and reusing established ones (exploitation) is a core design parameter.
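As one way to picture that exploration-exploitation balance, here is a minimal UCB1 bandit sketch in which arm 0 means reusing the established vocabulary and the other arms mean introducing a candidate abstraction. The arm set, the reward definition (communicative success), and the success rates are invented for illustration; the source does not specify which bandit algorithm ACE uses, so treat this as one plausible instantiation rather than the paper's method.

```python
import math, random

class UCB1:
    """UCB1 bandit: arm 0 = reuse established vocabulary, arms 1..k =
    introduce a candidate abstraction. Reward = communicative success."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def select(self):
        for arm, c in enumerate(self.counts):
            if c == 0:                       # try every arm once first
                return arm
        t = sum(self.counts)
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running mean of observed rewards
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy run: the abstraction behind arm 2 yields higher communicative
# success, so the speaker gradually commits to it (exploitation) while
# still occasionally probing the alternatives (exploration).
random.seed(0)
success_rate = [0.5, 0.4, 0.8]  # hypothetical per-arm success probabilities
bandit = UCB1(3)
for _ in range(200):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < success_rate[arm] else 0.0)
print(bandit.counts)  # most pulls should concentrate on arm 2
```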
The cognitive science framing is important: Ho et al. (2019) identified "the need to communicate and coordinate with others" as an outstanding open problem for understanding abstraction learning. ACE demonstrates that cooperative communication is indeed a sufficient pressure for driving abstraction: agents don't need explicit instruction to abstract; they need a reason to communicate efficiently.
Source: Cognitive Models Latent
Related concepts in this collection
- Why don't conversational AI systems mirror their users' word choices?
  Explores whether current dialogue models exhibit lexical entrainment (the human tendency to align vocabulary with conversation partners) and what is needed to bridge this gap in AI communication.
  Relation: ACE shows agents can develop shared vocabulary under cooperative pressure; current AI lacks this because the pressure is absent.
- Can we teach LLMs to form linguistic conventions in context?
  Humans naturally shorten references as conversations progress, but LLMs don't adapt their language for efficiency even when they understand that their partners do. Can training on coreference patterns teach this convention-forming behavior?
  Relation: a training-side fix for convention formation; ACE supplies the theoretical mechanism.
- Why don't LLMs shorten messages like humans do?
  Humans naturally develop shorter, more efficient language during conversations. Do multimodal LLMs exhibit this same spontaneous adaptation, or do they lack this communicative behavior?
  Relation: a convergent problem; LLMs understand efficiency but don't produce it, and ACE shows the missing ingredient is cooperative pressure.
- Can language help agents imagine goals they've never seen?
  How might compositional language enable artificial agents to target outcomes beyond their training experience? This matters because it could unlock open-ended exploration without hand-coded reward functions.
  Relation: IMAGINE shows the downstream consequence of communication-driven abstraction; once agents develop compact shared abstractions through cooperative pressure, language compositionality lets them recombine those abstractions into novel goals they have never experienced.
- Can agents learn continuously without forgetting old skills?
  Can lifelong learning systems retain previously acquired skills while acquiring new ones? This explores whether externalizing learned behaviors as retrievable code programs, rather than parameter updates, solves catastrophic forgetting.
  Relation: VOYAGER's skill libraries are the behavioral analog of ACE's communicative abstractions. Both develop reusable, composable units under performance pressure; the shared mechanism is that task demands drive agents toward compact, composable representations.
Original note title
communication pressure drives agents to develop compact shared abstractions — efficiency and informativeness are co-optimized through neurosymbolic library learning