Can neural memory modules scale language models beyond attention limits?
Can separating short-term attention from adaptive long-term memory allow models to efficiently handle context windows exceeding 2M tokens while maintaining competitive performance?
Titans (2501.00663) introduces a neural long-term memory module that addresses a fundamental contradiction in linear recurrent models: they are designed for efficiency on long contexts, but long contexts cannot be properly compressed into small fixed-size states.
The architectural insight is that attention and memory serve fundamentally different functions. Attention operates as short-term memory: accurate, direct dependency modeling within the current context window, though its quadratic cost limits its reach. Neural memory operates as long-term memory: compressed and persistent, memorizing tokens that are surprising or appear near surprising tokens. The memory update weighs momentary surprise (the gradient of the memory's loss on the current token) together with accumulated past surprise (a momentum term), while a data-dependent forgetting gate frees capacity when the memory fills, yielding adaptive memory management.
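As a rough illustration of this surprise-driven update, the sketch below treats the memory as a simple linear map (the paper's memory is a deep MLP trained online), with `eta`, `theta`, and `alpha` standing in for the momentum, learning-rate, and forgetting coefficients; here they are fixed scalars, whereas Titans makes them data-dependent.

```python
import numpy as np

def surprise(M, k, v):
    """Gradient of the associative-memory loss ||M k - v||^2 w.r.t. M."""
    err = M @ k - v                    # prediction error for this token
    return 2.0 * np.outer(err, k)

def memory_update(M, S_prev, k, v, eta=0.9, theta=0.1, alpha=0.01):
    """One online step: momentum over past surprise, plus forgetting."""
    S = eta * S_prev - theta * surprise(M, k, v)   # accumulated surprise
    M = (1.0 - alpha) * M + S                      # forget a little, then write
    return M, S
```

Repeated updates on a surprising (high-error) key-value pair drive the memory toward storing it, while the `(1 - alpha)` decay keeps old, no-longer-reinforced associations from accumulating without bound.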
Three integration variants are proposed: memory as context (retrieved memory is prepended so attention attends over memory alongside the current context), memory as gating (the memory branch modulates the attention output), and memory as a layer (the memory module is stacked in place of some attention layers). Each variant trades integration depth against computational overhead.
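A minimal sketch of the three wiring patterns, using placeholder `attention` and `memory_read` functions (the function bodies and shapes here are illustrative stand-ins, not the paper's actual modules):

```python
import numpy as np

def attention(x):                      # placeholder short-term module
    w = x @ x.T / np.sqrt(x.shape[1])  # x: [seq_len, d_model]
    w = np.exp(w - w.max(axis=1, keepdims=True))
    return (w / w.sum(axis=1, keepdims=True)) @ x

def memory_read(x):                    # placeholder long-term retrieval
    return np.tanh(x)                  # stands in for querying neural memory

def memory_as_context(x, mem_tokens):
    # MAC: retrieved memory tokens are prepended; attention sees both
    return attention(np.vstack([mem_tokens, x]))[len(mem_tokens):]

def memory_as_gate(x):
    # MAG: memory output gates the attention output elementwise
    gate = 1.0 / (1.0 + np.exp(-memory_read(x)))   # sigmoid gate
    return attention(x) * gate

def memory_as_layer(x):
    # MAL: memory runs as a layer stacked before attention
    return attention(memory_read(x))
```

The variants differ in where the overhead lands: memory-as-context lengthens the attended sequence, memory-as-gating adds an elementwise branch, and memory-as-layer keeps the sequence length fixed but inserts a sequential stage.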
The results establish that Titans outperforms both standard Transformers (at the same context window) and modern linear recurrent models across language modeling, common-sense reasoning, genomics, and time series. Critically, Titans scales to context windows beyond 2M tokens while remaining competitive with Transformers that use the entire context, so the long-context problem is addressed without the quadratic penalty. The persistent nature of the memory module makes it a natural substrate for "Can models precompute answers before users ask questions?": the memory can store precomputed inferences between interactions, and sleep-time processing can populate it with anticipated query-relevant information.
Relative to "Can models reason without generating visible thinking tokens?", the Titans architecture offers a complementary path: rather than scaling reasoning depth through recurrent computation, it scales memory breadth through adaptive memorization. Both bypass the limits of standard attention, but along different architectural dimensions.
Miras unifying framework (2504.13173): The "It's All Connected" paper reconceptualizes Transformers, Titans, and modern linear recurrent models as associative memory modules that learn a mapping from keys to values under an internal objective, termed "attentional bias." It observes that most existing sequence models use either dot-product similarity or ℓ2 regression as their attentional bias. Miras exposes four design choices: (i) the associative memory architecture, (ii) the attentional bias objective, (iii) the retention gate, and (iv) the memory learning algorithm. Forgetting mechanisms are reinterpreted as retention regularization, giving a principled basis for forget gates across architectures. Three novel sequence models (Moneta, Yaad, and Memora) go beyond existing linear RNNs while retaining fast, parallelizable training. Different Miras configurations yield models with different strengths: some excel at language modeling, others at commonsense reasoning or recall-intensive tasks. This generalizes the Titans insight: the attention-as-short-term/memory-as-long-term distinction is one point in a broader design space where the attentional bias objective and the retention mechanism can be varied independently.
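To make the design space concrete, the sketch below factors one memory step into swappable pieces. The objective and retention functions are generic stand-ins illustrating the framework's axes, not the exact Moneta/Yaad/Memora formulations.

```python
import numpy as np

def l2_bias_grad(M, k, v):
    """Gradient of the l2 attentional bias ||M k - v||^2 w.r.t. M."""
    return 2.0 * np.outer(M @ k - v, k)

def dot_bias_grad(M, k, v):
    """Gradient of the dot-product bias -<M k, v> w.r.t. M."""
    return -np.outer(v, k)

def l2_retention(M, lam=0.05):
    """Retention as l2 regularization toward zero (a soft forget gate)."""
    return (1.0 - lam) * M

def miras_step(M, k, v, bias_grad, retention, lr=0.1):
    """(iii) apply retention, then (iv) take one gradient step on
    (ii) the chosen bias, with (i) the memory fixed to a linear map M."""
    return retention(M) - lr * bias_grad(M, k, v)
```

Swapping `l2_bias_grad` for `dot_bias_grad` recovers a linear-attention-style accumulation, while changing `l2_retention` changes how aggressively old associations are forgotten; varying these axes independently is the framework's central move.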
Source: LLM Architecture; enriched from Memory
Related concepts in this collection
- Can models reason without generating visible thinking tokens?
  Explores whether intermediate reasoning must be verbalized as text tokens, or whether models can think in hidden continuous space. Challenges a foundational assumption about how language models scale their reasoning capabilities.
  Relation: complementary architectural innovation (depth-recurrent reasoning vs. breadth-adaptive memory).
- Can models reason without generating visible thinking steps?
  Do machine reasoning systems actually require verbalized chains of thought, or can they solve complex problems through hidden computation? This challenges how we measure and understand reasoning.
  Relation: alternative to verbalized reasoning.
- Can models precompute answers before users ask questions?
  Most LLM applications maintain persistent state across interactions. Could models use idle time between queries to precompute useful inferences about that context, reducing latency when users actually ask?
  Relation: Titans' persistent neural memory is a natural substrate for sleep-time compute. The memory module can store precomputed inferences between interactions, and sleep-time processing can update it with anticipated query patterns; both exploit statefulness to reduce per-query cost.
- Can latent thought vectors scale language models beyond parameters?
  Explores whether explicit latent thought vectors with dual-rate learning create new scaling dimensions independent of model size. This matters because it suggests alternatives to simply building larger models.
  Relation: LTMs implement fast-slow dynamics at the generation level (dual-rate learning of thought vs. token vectors), complementing Titans' fast-slow dynamics at the memory level (attention as short-term, neural memory as long-term).
Original note title
neural memory modules that adaptively memorize surprising tokens complement attention as long-term vs short-term memory — scaling to 2M+ context