Extrapolation by Association: Length Generalization Transfer in Transformers
Transformer language models have demonstrated impressive generalization capabilities in natural language domains, yet we lack a fine-grained understanding of how such generalization arises. In this paper, we investigate length generalization (the ability to extrapolate from shorter to longer inputs) through the lens of task association. We find that length generalization can be transferred across related tasks. That is, training a model with a longer and related auxiliary task can lead it to generalize to unseen and longer inputs from some other target task. We demonstrate this length generalization transfer across diverse algorithmic tasks, including arithmetic operations, string transformations, and maze navigation. Our results show that transformer models can inherit generalization capabilities from similar tasks when trained jointly. Moreover, we observe similar transfer effects in pretrained language models, suggesting that pretraining equips models with reusable computational scaffolding that facilitates extrapolation in downstream settings. Finally, we provide initial mechanistic evidence that length generalization transfer correlates with the reuse of the same attention heads between the tasks. Together, our findings deepen our understanding of how transformers generalize to out-of-distribution inputs and highlight the compositional reuse of inductive structure across tasks.
A central theme in the study of transformer language models is their ability to generalize. By scaling up data and model size, large language models develop emergent abilities that exceed expectations [Wei et al., 2022]. They can also transfer knowledge across domains and tasks [OpenAI, 2024, Brown et al., 2020, Sanh et al., 2022]. While it is widely believed that language models are not simply parroting or memorizing their training data, we still lack a fine-grained understanding of how language models apply skills learned during training to potentially unseen problems.
The out-of-distribution (OOD) generalization capabilities of language models have garnered much attention in the literature [Anil et al., 2022, Zhang et al., 2024, Yang et al., 2024]. In this work, we study a canonical example of OOD generalization, length generalization, which is the ability to generalize from shorter to longer inputs [Zhou et al., 2023]. A long line of work has sought to improve length generalization in transformers on arithmetic tasks, spurring innovations in positional encoding schemes and transformer architectures [Cho et al., 2024, McLeish et al., 2024]. Closely related is the concept of compositional generalization, where the model combines previously learned skills to solve new problems [Yang et al., 2024, Xu et al., 2024].
In this work, we study a new mechanism underlying length generalization: extrapolation by association. We hypothesize that, when faced with a problem outside its training distribution, language models can use related skills to solve it. Specifically, we ask: Can generalization to longer inputs in one task transfer to another task that is only trained on short examples?
To showcase length generalization transfer in transformers, we choose three distinct groups of synthetic tasks. The tasks in each group are related in that they involve similar algorithmic procedures. Within each group, we train multiple tasks together, and crucially, we train an “auxiliary task” at a longer length and a “main task” at a shorter length. Under this setup, we observe that the shorter main task generalizes to the length of the longer auxiliary task when the two are trained together. See Figure 2 for the tasks and respective lengths used in each experiment.
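To make the setup concrete, the following is a minimal sketch of how such a joint training mixture might be constructed. The specific tasks (string copy as the main task, string reversal as the auxiliary task), the length caps, and the function names are illustrative assumptions, not the paper's exact experimental configuration; the key idea is only that the main task is sampled at short lengths while the related auxiliary task is sampled at longer ones.

```python
import random

def make_example(task, length):
    """Generate one (input, target) pair for a toy string task.
    'copy' stands in for the main task, 'reverse' for the auxiliary task."""
    s = "".join(random.choice("abc") for _ in range(length))
    target = s if task == "copy" else s[::-1]
    return f"{task}:{s}", target

def build_mixture(n, main_max=10, aux_max=40, seed=0):
    """Joint training set: the main task is capped at short lengths,
    while the auxiliary task is sampled up to a much longer cap."""
    random.seed(seed)
    data = []
    for _ in range(n):
        if random.random() < 0.5:
            data.append(make_example("copy", random.randint(1, main_max)))
        else:
            data.append(make_example("reverse", random.randint(1, aux_max)))
    return data

train = build_mixture(1000)
```

Evaluation would then probe the main task at lengths beyond its training cap (e.g., `make_example("copy", 40)`) to measure whether length generalization has transferred from the auxiliary task.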