ACE: Abstractions for Communicating Efficiently

Paper · arXiv 2409.20120 · Published September 30, 2024
Cognitive Models · Latent · Linguistics · NLP · NLU

A central but unresolved aspect of problem-solving in AI is the ability to introduce and use abstractions, something humans excel at. Work in cognitive science has demonstrated that humans tend towards higher levels of abstraction when engaged in collaborative task-oriented communication, enabling gradually shorter and more information-efficient utterances. Several computational methods have attempted to replicate this phenomenon, but all make unrealistic simplifying assumptions about how abstractions are introduced and learned. Our method, Abstractions for Communicating Efficiently (ACE), overcomes these limitations through a neurosymbolic approach. On the symbolic side, we draw on work from library learning for proposing abstractions. We combine this with neural methods for communication and reinforcement learning, via a novel use of bandit algorithms to control the trade-off between exploration and exploitation when introducing new abstractions.
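To make the bandit component concrete, here is a minimal sketch of how a UCB1 bandit could arbitrate among candidate abstractions. All names and the reward signal are illustrative assumptions, not ACE's actual implementation: each arm stands for a candidate abstraction proposed by library learning, and its reward is taken to be the communication-length saving it yields when used.

```python
import math

class UCB1:
    """Hypothetical UCB1 bandit over candidate abstractions (a sketch, not ACE's code)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # times each candidate was tried
        self.values = [0.0] * n_arms  # running mean reward per candidate

    def select(self):
        # Try every candidate once before applying the UCB rule.
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm
        total = sum(self.counts)
        # UCB score = mean reward (exploitation) + confidence bonus (exploration).
        scores = [v + math.sqrt(2 * math.log(total) / c)
                  for v, c in zip(self.values, self.counts)]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental update of the running mean.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: three candidate abstractions with fixed length savings;
# the bandit concentrates its pulls on the most useful one.
bandit = UCB1(3)
savings = [0.1, 0.9, 0.3]  # assumed per-use communication savings
for _ in range(100):
    arm = bandit.select()
    bandit.update(arm, savings[arm])
```

The exploration bonus shrinks as a candidate is tried more often, so rarely used abstractions still get occasional trials, which mirrors the exploration/exploitation tension described above.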

Humans use abstractions in many procedural tasks such as cooking, building shapes, and programming. A seminal paper by Ho et al. (2019), combining perspectives from cognitive science, AI, and reinforcement learning, argues that abstractions guide learning, facilitate trade-offs, and simplify computation. They identified an outstanding open problem: "future work in both computer science and psychology will need to identify other pressures that can shape abstraction learning – such as the need to communicate and coordinate with others". We take the first steps toward solving this problem in the setting of cooperative artificial agents. When presented with the opportunity for repeated interaction, human conversational partners tend towards more concise utterances (Hawkins, Frank, and Goodman 2020; McCarthy et al. 2021; Krauss and Weinheimer 1964). This tendency towards brevity is facilitated by the introduction of new, more informative abstractions into their shared language, which enables shorter utterances and thereby promotes more efficient communication and cooperation.

Human languages are subject to pressures to be informative and to minimise cognitive load (i.e. to have a small language). In previous work, emergent communication methods have already been used to explain the formation and structure of human languages in domains such as numerals.
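The two pressures above pull in opposite directions, which can be illustrated with a toy scoring function. The weighting and the scoring itself are assumptions for illustration only, not a formula from the paper: a language is rewarded for informativeness and penalised for its size.

```python
# Hypothetical trade-off score (an illustration, not the paper's objective):
# informative languages score higher, but every extra item in the lexicon
# adds cognitive load and is penalised.
def language_score(informativeness, language_size, penalty=0.5):
    """Higher is better: informative yet compact languages win."""
    return informativeness - penalty * language_size

# A compact, moderately informative language can beat a larger,
# slightly more informative one once the size penalty is applied.
compact = language_score(informativeness=8.0, language_size=4)   # 8.0 - 2.0 = 6.0
verbose = language_score(informativeness=9.0, language_size=10)  # 9.0 - 5.0 = 4.0
```

Introducing a well-chosen abstraction raises informativeness per symbol while keeping the lexicon small, which is exactly the regime this toy score favours.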

We have demonstrated that ACE is capable of developing a compact collaborative language, resulting in shorter programs. The compactness of the language is a consequence of pressures that arise naturally from the need to communicate about a shared task. ACE achieves this through a novel combination of emergent communication, library learning, and bandits. Our work serves as a bridge between abstraction learning and efficient communication (Kemp and Regier 2012; Gibson et al. 2017; Zaslavsky et al. 2019; Gibson et al. 2019; Denic and Szymanik 2024).