Can parallel architectures solve fundamentally sequential problems?
Explores whether purely parallel computation, like that of Transformers, can tackle problems requiring long chains of dependent reasoning, or whether serial depth is theoretically necessary for certain classes of problems.
The distinction between problems that are "wide" (parallelizable) and "deep" (inherently serial) is fundamental to computational complexity theory but underappreciated in machine learning. The Serial Scaling Hypothesis formalizes this: for many important ML problems — especially those involving complex reasoning, planning, or evolution of interacting systems — increasing parallel computation alone is insufficient. Progress requires scaling serial computation.
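The split is easy to see in code. Below is a minimal sketch in plain Python/numpy (illustrative, not from the source) contrasting a wide problem, where associativity permits a logarithmic-depth tree reduction, with a deep one, where no known reformulation removes the step-to-step dependency:

```python
import numpy as np

# "Wide" problem: summing N numbers. Addition is associative, so the
# work splits across parallel workers; a tree reduction needs only
# O(log N) serial layers.
def tree_sum(xs):
    xs = list(xs)
    while len(xs) > 1:                      # each while-iteration is one
        if len(xs) % 2:                     # parallel "layer"
            xs.append(0.0)
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
    return xs[0]

# "Deep" problem: iterating a non-associative map (here, the logistic
# map). Step t+1 needs the result of step t, so the serial depth grows
# linearly with the number of steps.
def iterate_logistic(x, steps, r=3.9):
    for _ in range(steps):                  # inherently serial loop
        x = r * x * (1.0 - x)
    return x

print(tree_sum(np.arange(8)))        # 28.0, reachable in 3 parallel layers
print(iterate_logistic(0.5, 1000))   # requires 1000 dependent steps
```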
The complexity-theoretic argument is precise. Transformers and state-space models, despite processing "sequences," are parallelizable computations: in a single forward pass, each layer ingests all N tokens concurrently, and only autoregressive decoding (one token at a time) adds serial steps. This places the forward pass in constant-depth circuit classes such as TC0 (or AC0 in restricted settings), which, under standard complexity conjectures, cannot solve inherently serial problems, such as P-complete ones, that are believed to require polynomially many dependent computation steps.
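To make the parallelism concrete, here is a minimal numpy sketch of a single-head attention layer (toy sizes; no masking, residuals, or MLP sublayer; all names are illustrative). Every token's output falls out of the same few matrix multiplications, so adding tokens widens the computation without deepening it:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_layer(X, Wq, Wk, Wv):
    """One single-head, unmasked self-attention layer.

    X has shape (N, d): all N token representations are transformed in
    the same few matrix multiplications. The serial depth of the whole
    forward pass is the number of layers, fixed at training time and
    independent of N.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # all N x N scores at once
    return A @ V                                  # all N outputs at once

rng = np.random.default_rng(0)
N, d = 16, 8                     # hypothetical toy sizes
X = rng.normal(size=(N, d))
Ws = [rng.normal(size=(d, d)) for _ in range(3)]
Y = attention_layer(X, *Ws)      # one parallel step over all 16 tokens
print(Y.shape)                   # (16, 8)
```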
The most striking finding: diffusion models cannot solve inherently serial problems. Although iterative denoising makes them look non-parallelizable, a diffusion model with a fixed-depth TC0 backbone remains in TC0 even with arbitrarily many diffusion steps. The iteration is a red herring: what matters is the computational depth per step, and that is bounded.
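A toy sketch of the structure at issue, assuming a generic Euler-style sampling loop (the `denoiser`, step count `T`, and update rule here are illustrative, not the paper's construction). The loop runs serially, but every iteration calls the same bounded-depth backbone; the theoretical claim concerns what such fixed-depth-per-step loops can compute, which this sketch illustrates rather than proves:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W1 = rng.normal(size=(d, d)) * 0.1
W2 = rng.normal(size=(d, d)) * 0.1

def denoiser(x, t):
    # A fixed-depth backbone: two layers, regardless of how many
    # diffusion steps call it. Its circuit depth is a constant.
    h = np.tanh(x @ W1 + t)
    return h @ W2

x = rng.normal(size=d)               # start from pure noise
T = 1000
for step in range(T, 0, -1):         # many steps, but each runs the
    t = step / T                     # SAME bounded-depth computation
    x = x - (1.0 / T) * denoiser(x, t)
print(x[:3])
```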
Consider Sudoku as a parable. Easy puzzles can be solved by filling blanks independently — parallel. Hard puzzles require long chains of dependent reasoning where each blank depends on others — serial. No algorithm can shortcut this dependency chain. The same distinction applies to mathematical reasoning, physical simulation, and sequential decision-making.
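As a sketch of how the Sudoku parable cashes out computationally, the following simplified solver (illustrative, not from the source) applies only "naked singles": each round fills every currently forced cell, a step that could run entirely in parallel, and the number of rounds the loop needs is exactly the length of the serial dependency chain:

```python
def peers(i, j):
    """Row, column, and 3x3-box neighbours of cell (i, j)."""
    br, bc = 3 * (i // 3), 3 * (j // 3)
    cells = {(i, c) for c in range(9)} | {(r, j) for r in range(9)}
    cells |= {(br + r, bc + c) for r in range(3) for c in range(3)}
    cells.discard((i, j))
    return cells

def solve_singles(grid):
    """grid: 9x9 list of lists, 0 = blank. Fills forced cells in place
    and returns the number of rounds used (the serial depth)."""
    rounds = 0
    while True:
        forced = []
        for i in range(9):
            for j in range(9):
                if grid[i][j]:
                    continue
                cand = set(range(1, 10)) - {grid[r][c] for r, c in peers(i, j)}
                if len(cand) == 1:
                    forced.append((i, j, cand.pop()))
        if not forced:
            return rounds
        for i, j, v in forced:      # one parallel layer of placements
            grid[i][j] = v
        rounds += 1                 # serial depth grows by one

# usage: depth = solve_singles(puzzle)   # easy puzzles: depth ~ 1;
#                                        # hard ones: many dependent rounds
```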
The implications cut three ways:
- Model design: Future architectures need recurrent structure to scale serial computation, not just wider parallel blocks (a minimal sketch follows this list). But recurrence raises trainability concerns: gradient variance grows and Lipschitz constants compound across steps, making optimization harder.
- Hardware: The field may need faster, lower-latency processors, not just more parallel ones.
- Evaluation: Serial compute should be measured and reported separately from total compute.
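Here is the minimal sketch of the recurrent, weight-tied direction the first bullet points to (sizes and the stability scaling are illustrative assumptions). Serial depth becomes a test-time dial, independent of parameter count, and the compounding of the block's Jacobian across steps is precisely the source of the gradient-variance and Lipschitz concerns noted above:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Scale W so the iteration stays stable; the compounding of this
# block's Jacobian across steps is what makes training harder.
W = rng.normal(size=(d, d)) * (0.9 / np.sqrt(d))

def recurrent_block(h, x):
    # One weight-tied step; reusing W lets serial depth grow at test
    # time without growing the parameter count.
    return np.tanh(h @ W + x)

def forward(x, serial_steps):
    h = np.zeros_like(x)
    for _ in range(serial_steps):   # serial compute is a dial, chosen
        h = recurrent_block(h, x)   # per problem instance
    return h

x = rng.normal(size=d)
easy = forward(x, serial_steps=4)    # shallow budget for easy inputs
hard = forward(x, serial_steps=64)   # deep serial budget for hard inputs
```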
This provides the complexity-theoretic foundation for why "Can models reason without generating visible thinking tokens?" matters: recurrent depth-scaling is not just an efficiency trick but a necessity for inherently serial problems that parallel architectures provably cannot solve.
Source: Novel Architectures
Related concepts in this collection
- How should we balance parallel versus sequential compute at test time?
  Test-time compute can prioritize breadth (trying many approaches) or depth (refining one approach). Which strategy works better, and does the answer depend on the problem?
  Relation: the Serial Scaling Hypothesis provides the theoretical grounding for when sequential compute is not just preferred but required.
- Why does parallel reasoning outperform single-chain thinking?
  Does dividing a fixed token budget across multiple independent reasoning paths beat spending it all on one long chain? This explores how breadth and diversity in reasoning compare to depth.
  Relation (qualified): parallel wins only on parallelizable problems; for inherently serial problems, parallel scaling is provably insufficient.
- Can models reason without generating visible thinking tokens?
  Explores whether intermediate reasoning must be verbalized as text tokens, or if models can think in hidden continuous space. Challenges a foundational assumption about how language models scale their reasoning capabilities.
  Relation: recurrent latent reasoning is the architecture class that can scale serial computation.
- Can models reason without generating visible thinking steps?
  Do machine reasoning systems actually require verbalized chains of thought, or can they solve complex problems through hidden computation? This challenges how we measure and understand reasoning.
  Relation: latent recurrence may be the path to serial depth without the quadratic cost of verbalized CoT.
Original note title: the serial scaling hypothesis — some problems are fundamentally sequential and parallel architectures cannot solve them