Looking beyond the next token

Paper · arXiv 2504.11336 · Published April 15, 2025
LLM Architecture · Self Refinement · Self Consistency Feedback · Novel Architectures

The structure of causal language model training assumes that each token can be accurately predicted from the preceding context. This contrasts with how humans naturally write and reason, where goals are typically known before the exact argument or phrasing. While this mismatch has been well studied in the literature, the working assumption has been that architectural changes are needed to address it. We argue that simply rearranging and processing the training data sequences can allow models to more accurately imitate the true data-generating process, and requires no other changes to the architecture or training infrastructure. We demonstrate that this technique, TRELAWNEY, and the inference algorithms derived from it improve performance on several key benchmarks spanning planning, algorithmic reasoning, and story generation. Finally, our method naturally enables the generation of long-term goals at no additional cost, and we investigate how the model's goal-generation capability can further improve planning and reasoning. Additionally, we believe TRELAWNEY could open doors to new capabilities beyond the current language modeling paradigm.

Next-token prediction (NTP) is the primary objective for training sequence models. This objective relies on teacher forcing (Williams & Zipser, 1989), in which the model's own prediction at each step is replaced with the ground-truth token from the dataset. One benefit of teacher forcing is that it accelerates training: because the model always conditions on the correct previous tokens, learning does not suffer from error accumulation and gradient updates are more stable. Another crucial benefit is that it enables parallelism and hardware acceleration, since the model can process all time steps simultaneously rather than waiting sequentially on its own predictions. However, Bachmann & Nagarajan (2024) argue that models trained with teacher forcing often fail to learn long-range dependencies, latching onto local patterns and surface-level correlations instead.
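
To make the objective concrete, below is a minimal sketch of teacher-forced next-token prediction in PyTorch. The causal `model` interface (token ids in, per-position logits out) is an assumption for illustration, not code from the paper.

```python
import torch.nn.functional as F

def next_token_loss(model, tokens):
    """Teacher-forced next-token prediction loss.

    tokens: (batch, seq_len) ground-truth token ids.
    `model` is assumed to map token ids to per-position logits.
    """
    # Inputs are the true tokens; targets are the same sequence shifted
    # by one, so position t is scored on predicting token t+1.
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)  # (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```

Because the inputs are the ground-truth tokens rather than the model's own samples, the loss at every position is computed in a single parallel forward pass, which is exactly the hardware-friendly property noted above.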

Several recent methods have been proposed to alleviate the issues of teacher forcing. One popular approach is multi-token prediction, where the model learns to predict several future tokens at once (Bachmann & Nagarajan, 2024; Gloeckle et al., 2024; DeepSeek-AI et al., 2024). Another family of approaches changes both the training objective and the model architecture so that the model predicts the next token of a prefix as well as the previous token of a suffix (Hu et al., 2025). Most of these approaches either require nontrivial modifications to the model architecture or make learning substantially harder by forcing the model to predict multiple tokens simultaneously.
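
As a rough sketch of the multi-token idea, one can attach k independent readout heads to a shared trunk, with head i trained to predict the token i steps ahead. The module below is an illustrative assumption in the spirit of Gloeckle et al. (2024), not the exact architecture of any cited method.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenHeads(nn.Module):
    """Illustrative k-head readout: head i predicts the token at t + i."""

    def __init__(self, d_model, vocab_size, k=4):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(k)
        )

    def loss(self, hidden, tokens):
        # hidden: (batch, seq_len, d_model) from a shared causal trunk
        # tokens: (batch, seq_len) ground-truth token ids
        total = 0.0
        for i, head in enumerate(self.heads, start=1):
            logits = head(hidden[:, :-i])  # positions that have a t+i target
            targets = tokens[:, i:]        # the token i steps ahead
            total = total + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                targets.reshape(-1),
            )
        return total / len(self.heads)
```

Averaging the per-head losses keeps the overall scale comparable to standard NTP; variants of this idea differ mainly in how the heads share parameters and how the auxiliary losses are weighted.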

In this work, we investigate a data-centric approach to address these limitations. In contrast to the strictly sequential nature of traditional training, the flow of information in real-world tasks is highly non-linear. Instead of modifying the model architecture, our method TRELAWNEY modifies the training data by introducing alternative factorizations that embed inductive biases directly. Concretely, we augment the training corpus by interleaving it with special lookahead tokens, <T> and </T>, that encapsulate future information (see Figure 1). The exact placement and content of these tokens can be determined either randomly or with task-specific knowledge. We hypothesize that this augmentation makes long-term dependencies easier to learn and imbues the model with the capacity to plan ahead. Furthermore, the modified training data naturally teaches the model to steer generation toward the encapsulated future information, so the lookahead tokens also let users exert fine-grained control over long-term generation.
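
A minimal sketch of one such augmentation on a tokenized sequence is shown below. The jump distances, span length, and purely random placement are illustrative assumptions, not the paper's exact settings.

```python
import random

def insert_lookahead(tokens, t_open="<T>", t_close="</T>",
                     min_jump=8, max_jump=32, span=4):
    """Splice a future excerpt back into the sequence as a lookahead.

    Produces  x_1 .. x_i  <T> x_j .. x_{j+span-1} </T>  x_{i+1} .. x_n
    where j lies min_jump..max_jump tokens ahead of insertion point i.
    Placement here is random; task knowledge can pick better spots.
    """
    if len(tokens) < max_jump + span + 2:
        return tokens  # too short to augment; leave unchanged
    i = random.randrange(1, len(tokens) - max_jump - span)
    j = i + random.randrange(min_jump, max_jump)
    excerpt = tokens[j : j + span]
    return tokens[:i] + [t_open, *excerpt, t_close] + tokens[i:]
```

Because the model sees <T> ... </T> segments throughout training, the same delimiters can be reused at inference time, either letting the model generate its own goal or letting a user inject one to steer generation.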