Beyond Turing: Memory-Amortized Inference as a Foundation for Cognitive Computation

Paper · arXiv 2508.14143 · Published August 19, 2025

Abstract—Intelligence is fundamentally non-ergodic: it emerges not from uniform sampling or optimization from scratch, but from the structured reuse of prior inference trajectories. We introduce Memory-Amortized Inference (MAI) as a formal framework in which cognition is modeled as inference over latent cycles in memory, rather than recomputation through gradient descent. MAI systems encode inductive biases via structural reuse, minimizing entropy and enabling context-aware, structure-preserving inference. This approach reframes cognitive systems not as ergodic samplers, but as navigators over constrained latent manifolds, guided by persistent topological memory. Through the lens of delta-homology, we show that MAI provides a principled foundation for Mountcastle’s Universal Cortical Algorithm, modeling each cortical column as a local inference operator over cycle-consistent memory states. Furthermore, we establish a time-reversal duality between MAI and reinforcement learning: whereas RL propagates value forward from reward, MAI reconstructs latent causes backward from memory. This inversion paves the way toward energy-efficient inference and addresses the computational bottlenecks facing modern AI. MAI thus offers a unified, biologically grounded theory of intelligence based on structure, reuse, and memory.

In this paper, we propose that Memory-Amortized Inference (MAI) captures this core asymmetry. We argue that intelligence should be modeled not as stochastic wandering, but as structure-preserving, memory-amortized inference over topologically constrained latent manifolds [7]. In MAI, inference does not begin anew at each time point; rather, it proceeds by reusing structured memory cycles. These memory trajectories define topologically stable, entropy-minimizing paths through representational space, enabling context-aware inference at dramatically reduced computational cost. This path-dependent behavior, grounded in memory, context, and prediction, marks a sharp departure from traditional ergodic models of computation and learning (e.g., reinforcement learning [8], Bayesian learning [9], stochastic optimization [10], variational inference [11]).
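To make the amortization idea concrete, the following minimal Python sketch (our illustration; none of these names or design choices appear in the paper) caches latent-inference trajectories and warm-starts new queries from the most similar stored trajectory instead of optimizing from scratch. A toy linear generative model stands in for the latent manifold.

```python
import numpy as np

# Minimal sketch of memory-amortized inference (all names hypothetical).
# A toy generative model x ≈ W @ z is inverted by gradient descent on the
# latent z. Cold queries pay the full optimization cost; queries whose
# context resembles a stored trajectory warm-start from the remembered
# latent and need only a few refinement steps.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))        # fixed toy generative map
memory = []                        # stored (context, latent trajectory) pairs

def grad_energy(z, x):
    """Gradient of the reconstruction error 0.5 * ||x - W @ z||^2 w.r.t. z."""
    return -W.T @ (x - W @ z)

def infer(x, cold_steps=200, warm_steps=10, lr=0.05):
    if memory:  # reuse: warm-start from the most similar stored trajectory
        _, stored = min(memory, key=lambda m: np.linalg.norm(m[0] - x))
        z, steps = stored[-1].copy(), warm_steps
    else:       # no memory yet: infer from scratch
        z, steps = np.zeros(W.shape[1]), cold_steps
    traj = [z.copy()]
    for _ in range(steps):
        z -= lr * grad_energy(z, x)
        traj.append(z.copy())
    memory.append((x.copy(), traj))  # amortize: remember this trajectory
    return z, steps

x = W @ rng.normal(size=4)
_, cold = infer(x)                               # full optimization
_, warm = infer(x + 0.01 * rng.normal(size=8))   # nearby context reuses memory
print(f"inference steps paid: cold={cold}, warm={warm}")
```

On this toy problem the warm-started query pays an order of magnitude fewer gradient steps; this is the amortization effect that MAI, on our reading, generalizes to cycle-structured topological memory.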

Finally, we demonstrate that MAI offers a theoretical bridge between memory and decision-making by reframing Reinforcement Learning (RL) [8] as the time-forward counterpart of MAI. Whereas RL bootstraps value estimates forward over future states, MAI reconstructs latent pasts from memory structures; both rely on partial, structure-aware updates to minimize uncertainty. This duality allows MAI to invert the reward-driven flow of RL, replacing energy-intensive iteration with structure-aware reuse [2]. In doing so, MAI addresses the energy bottleneck of modern AI, offering a path toward biologically plausible, energy-efficient artificial general intelligence (AGI) grounded in memory rather than brute-force computation [17].
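The following toy sketch illustrates one reading of this time-reversal duality (our construction, not the paper's formalism). On a 10-state chain with reward only at the terminal state, forward value iteration needs many sweeps before reward information reaches the start, whereas a single backward pass over a remembered successful trajectory propagates the outcome to its earlier causes at once.

```python
import numpy as np

# Hedged sketch of the RL/MAI time-reversal duality on a toy chain task.
# Forward (RL): repeated Bellman sweeps push value from the reward.
# Backward (MAI-style): one reverse pass over an episodic memory trace
# reconstructs credit for the latent causes of the outcome.

N, gamma = 10, 0.9

# Forward direction: value iteration from the terminal reward.
V = np.zeros(N)
sweeps = 0
while True:
    V_new = V.copy()
    for s in range(N - 1):
        V_new[s] = gamma * max(V[s], V[s + 1])  # stay or move right
    V_new[N - 1] = 1.0                           # terminal reward
    sweeps += 1
    if np.allclose(V_new, V):
        break
    V = V_new
print(f"forward RL: {sweeps} sweeps to converge")

# Backward direction: one reverse pass over a stored trajectory.
trajectory = list(range(N))        # episodic memory of a successful episode
credit = np.zeros(N)
credit[trajectory[-1]] = 1.0
for s_prev, s_next in zip(reversed(trajectory[:-1]), reversed(trajectory[1:])):
    credit[s_prev] = gamma * credit[s_next]      # reconstruct causes backward
print(f"backward replay: 1 pass, credit at start = {credit[0]:.3f}")
```

The forward pass must iterate until value estimates stop changing, while the backward pass over memory assigns the same discounted credit in a single sweep; the contrast in sweep counts is a simple instance of the structure-aware reuse the paper attributes to MAI.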