LLM Reasoning and Architecture

Can neural memory modules scale language models beyond attention limits?

Can separating short-term attention from adaptive long-term memory allow models to efficiently handle context windows exceeding 2M tokens while maintaining competitive performance?

Note · 2026-02-22 · sourced from LLM Architecture
How should we allocate compute budget at inference time? · What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

Titans (2501.00663) introduces a neural long-term memory module that addresses a fundamental contradiction in linear recurrent models: they are designed for efficiency on long contexts, but long contexts cannot be properly compressed into small fixed-size states.

The architectural insight is that attention and memory serve fundamentally different functions. Attention operates as short-term memory — accurate direct dependency modeling within the current context window, but quadratic cost limits its reach. Neural memory operates as long-term memory — compressed and persistent, memorizing tokens that are surprising or that closely follow surprising ones. The update rule measures surprise as the gradient of the memory's loss on incoming tokens, carries it forward with a momentum term, and applies an adaptive forgetting gate that balances limited memory capacity against incoming surprise, yielding adaptive memory management rather than uniform compression.
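A minimal sketch of this update rule, assuming a two-layer MLP as the memory and fixed scalar gates in place of the paper's input-dependent ones (the class name, signatures, and hyperparameter values are illustrative, not from the paper's code):

```python
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    """Illustrative long-term memory: an MLP mapping keys to values,
    updated at test time by a surprise-driven rule in the spirit of
    Titans (2501.00663). theta/eta/alpha are fixed scalars here; in
    the paper they are data-dependent gates."""

    def __init__(self, dim: int, theta: float = 0.1, eta: float = 0.9, alpha: float = 0.01):
        super().__init__()
        self.mem = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.theta, self.eta, self.alpha = theta, eta, alpha
        # momentum buffers holding "past surprise" for each parameter
        self.momentum = [torch.zeros_like(p) for p in self.mem.parameters()]

    @torch.no_grad()
    def read(self, query: torch.Tensor) -> torch.Tensor:
        return self.mem(query)

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Momentary surprise: gradient of the associative loss ||M(k) - v||^2
        loss = (self.mem(key) - value).pow(2).mean()
        grads = torch.autograd.grad(loss, list(self.mem.parameters()))
        with torch.no_grad():
            for p, m, g in zip(self.mem.parameters(), self.momentum, grads):
                m.mul_(self.eta).add_(g, alpha=-self.theta)  # S_t = eta*S_{t-1} - theta*grad
                p.mul_(1 - self.alpha).add_(m)               # M_t = (1-alpha)*M_{t-1} + S_t
```

The momentum buffer keeps writing for a few tokens after a surprising one, and the `(1 - alpha)` decay acts as the forgetting gate that frees capacity when the stream stops being surprising.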

Three integration variants are proposed: memory as a context (MAC, where retrieved memory tokens and learned persistent tokens are prepended to the current segment before attention), memory as a gate (MAG, where the memory's output is combined with sliding-window attention output through a gating function), and memory as a layer (MAL, where the neural memory is stacked as its own layer, compressing the sequence before attention sees it). Each variant trades integration depth against computational overhead.
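A rough sketch of the memory-as-context path, reusing the `NeuralMemory` above (the `attend` callable, the `persistent` tokens, and the exact read/write placement are assumptions for illustration, not the paper's implementation):

```python
import torch

def mac_block(x: torch.Tensor, memory: NeuralMemory,
              persistent: torch.Tensor, attend) -> torch.Tensor:
    """Memory-as-context: retrieve history from long-term memory and
    prepend it, with learned persistent tokens, to the current segment,
    so short-term attention sees [persistent | retrieved | current]."""
    retrieved = memory.read(x)                          # history relevant to this segment
    ctx = torch.cat([persistent, retrieved, x], dim=0)  # extended context, seq-first
    y = attend(ctx)[-x.shape[0]:]                       # attention output for current tokens
    memory.write(x, y)                                  # fold the new segment into memory
    return y
```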

The results establish that Titans outperform both standard Transformers (with the same context window) and modern linear recurrent models across language modeling, common-sense reasoning, genomics, and time series. Critically, Titans scale to context windows larger than 2M tokens while showing competitive performance with Transformers that use the entire context — the long-context problem is addressed without the quadratic penalty. The persistent nature of the memory module makes it a natural substrate for Can models precompute answers before users ask questions? — the memory can store precomputed inferences between interactions, and sleep-time processing can populate the memory with anticipated query-relevant information.

Compared with Can models reason without generating visible thinking tokens?, the Titans architecture offers a complementary path: rather than scaling reasoning depth through recurrent computation, it scales memory breadth through adaptive memorization. Both bypass the limitations of standard attention, but along different architectural dimensions.

Miras unifying framework (2504.13173): The "It's All Connected" paper reconceptualizes Transformers, Titans, and modern linear recurrent models as associative memory modules that learn a key-to-value mapping by optimizing an internal objective — termed "attentional bias." Most existing sequence models, it observes, use either dot-product similarity or ℓ2 regression as that objective. Miras turns this into a general framework with four design choices: (i) associative memory architecture, (ii) attentional bias objective, (iii) retention gate, and (iv) memory learning algorithm. Forgetting mechanisms are reinterpreted as retention regularization, giving a principled basis for forget gates across architectures. Three novel sequence models — Moneta, Yaad, and Memora — instantiate non-standard choices in this space and outperform existing linear RNNs while maintaining fast parallelizable training; different configurations yield different strengths, with some excelling at language modeling and others at commonsense reasoning or recall-intensive tasks. This generalizes the Titans insight: the attention-as-short-term/memory-as-long-term distinction is one point in a broader design space where the attentional bias objective and the retention mechanism can be varied independently.
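A toy rendering of that design space for a matrix-valued memory, where one update is a gradient step on a chosen attentional-bias objective followed by a retention gate (the objective menu and the multiplicative retention are illustrative simplifications, not the paper's Moneta/Yaad/Memora definitions):

```python
import torch

def miras_step(M: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
               lr: float = 0.1, retention: float = 0.95, bias: str = "l2") -> torch.Tensor:
    """One online update of a (d x d) memory M on a key/value pair under
    the Miras view: gradient descent on an attentional-bias objective,
    then a retention gate standing in for the forget mechanism."""
    if bias == "l2":
        # l2 regression bias ||M k - v||^2  ->  grad = 2 (M k - v) k^T
        grad = 2 * torch.outer(M @ k - v, k)
    else:
        # dot-product similarity bias -v^T M k  ->  grad = -v k^T
        grad = -torch.outer(v, k)
    # retention gate: decay the old state before the write (a forget gate,
    # read as retention regularization toward zero)
    return retention * M - lr * grad
```

Swapping the bias swaps the model family: the dot-product choice recovers a gated linear-attention style update, while the ℓ2 choice recovers a delta-rule style update, with the retention gate varying independently of either.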


Source: LLM Architecture; enriched from Memory


neural memory modules that adaptively memorize surprising tokens complement attention as long-term vs short-term memory — scaling to 2M+ context