Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey

Paper · arXiv 2012.09830 · Published December 17, 2020

Building autonomous machines that can explore open-ended environments, discover possible interactions and build repertoires of skills is a general objective of artificial intelligence. Developmental approaches argue that this can only be achieved by autotelic agents: intrinsically motivated learning agents that can learn to represent, generate, select and solve their own problems. In recent years, the convergence of developmental approaches with deep reinforcement learning (RL) methods has led to the emergence of a new field: developmental reinforcement learning. Developmental RL is concerned with the use of deep RL algorithms to tackle a developmental problem: the intrinsically motivated acquisition of open-ended repertoires of skills. The self-generation of goals requires learning compact goal encodings as well as their associated goal-achievement functions. This raises new challenges compared to standard RL algorithms, which were originally designed to tackle pre-defined sets of goals using external reward signals.
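To make the goal-conditioned setting concrete, here is a minimal sketch (our illustration, not an algorithm from the survey) of its two distinguishing ingredients: the agent samples its own goals rather than receiving them, and a goal-achievement function R(s, g) replaces the external reward. The proximity test, goal buffer and oracle policy are simplifying assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def goal_achievement(state, goal, tol=0.1):
    # In autotelic agents this function is learned; here it is a fixed
    # proximity test, purely for illustration.
    return float(np.linalg.norm(state - goal) < tol)

def sample_goal(goal_buffer):
    # Autotelic agents generate their own goals; a common simple choice
    # is to resample previously visited states.
    return goal_buffer[rng.integers(len(goal_buffer))]

# Toy setup: states and goals are 2-D points; the "goal buffer" stands in
# for states discovered during exploration.
goal_buffer = [rng.normal(size=2) for _ in range(10)]

successes = 0.0
for episode in range(100):
    goal = sample_goal(goal_buffer)
    # Oracle "policy" for illustration: land near the sampled goal,
    # with some execution noise.
    state = goal + rng.normal(scale=0.05, size=2)
    successes += goal_achievement(state, goal)
```

The point of the sketch is structural: both the goal distribution and the reward come from inside the agent, which is exactly what standard externally rewarded RL does not provide.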

We can think of two approaches to this problem: developmental approaches, in particular developmental robotics, and reinforcement learning (RL). Developmental robotics takes inspiration from artificial intelligence, developmental psychology and neuroscience to model cognitive processes in natural and artificial systems (Asada et al., 2009; Cangelosi & Schlesinger, 2015). Following the idea that intelligence should be embodied, robots are often used to test learning models. Reinforcement learning, on the other hand, is the field concerned with problems where agents learn to behave by experiencing the consequences of their actions in the form of rewards and costs. These agents are not explicitly taught; they must learn to maximize cumulative rewards over time by trial and error (Sutton & Barto, 2018).
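Trial-and-error maximization of cumulative reward can be illustrated with tabular Q-learning, the standard textbook algorithm (not specific to this survey). The 5-state chain environment and all hyperparameters below are our own toy assumptions: the agent starts with no knowledge, acts ε-greedily, and the temporal-difference update propagates the terminal reward backwards until moving right becomes the greedy choice everywhere.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2        # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3

def step(s, a):
    # Deterministic chain: reward 1 only on reaching the last state.
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # ε-greedy action selection: explore with probability eps.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Temporal-difference update toward the bootstrapped return.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

greedy_policy = Q.argmax(axis=1)   # action 1 (right) in every non-terminal state
```

Note that the reward here is external and hand-specified, which is precisely the assumption that the intrinsically motivated agents discussed below relax.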

Most of the time, humans are not motivated by external rewards but spontaneously explore their environment to discover and learn about what is around them. This behavior seems to be driven by intrinsic motivations (IMs): a set of brain processes that motivate humans to explore for the mere purpose of experiencing novelty, surprise or learning progress (Berlyne, 1966; Gopnik et al., 1999; Kidd & Hayden, 2015; Oudeyer & Smith, 2016; Gottlieb & Oudeyer, 2018).
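One common way to operationalize such intrinsic motivation in artificial agents is a surprise bonus: the reward is the prediction error of a learned forward model of the environment, so familiar transitions become unrewarding as the model improves. The sketch below is our own toy illustration under strong assumptions (a linear forward model trained by gradient descent on known linear dynamics), not a method from the survey.

```python
import numpy as np

rng = np.random.default_rng(2)

W = np.zeros((2, 2))   # learned linear forward model: predicts s' ≈ W @ s
lr = 0.1

def intrinsic_reward(s, s_next):
    global W
    pred = W @ s
    error = s_next - pred
    # Update the forward model (LMS step), then reward the agent for
    # how surprising the transition was under the old model.
    W += lr * np.outer(error, s)
    return float(np.linalg.norm(error))

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # true (unknown) dynamics
s = rng.normal(size=2)
rewards = []
for t in range(300):
    s_next = A @ s
    rewards.append(intrinsic_reward(s, s_next))
    s = s_next + rng.normal(scale=0.3, size=2)  # perturb to keep visiting new states
```

As the model converges to the true dynamics, the intrinsic reward decays toward zero, which is the mechanism that pushes such agents away from mastered regions of the environment and toward novelty. Learning-progress variants reward the decrease in prediction error rather than the error itself, which is more robust to irreducibly unpredictable noise.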

The integration of IMs into artificial agents thus seems to be a key step towards autonomous learning agents (Schmidhuber, 1991c; Kaplan & Oudeyer, 2007).

Recently, we have been observing a convergence of these two fields, forming a new domain that we propose to call developmental reinforcement learning, or more broadly developmental artificial intelligence. Indeed, RL researchers now incorporate fundamental ideas from the developmental robotics literature into their own algorithms, and conversely, developmental robotics learning architectures are beginning to benefit from the generalization capabilities of deep RL techniques.