Do prior errors in context history amplify future errors?

When a language model makes mistakes early in a task, do those errors contaminate subsequent predictions? We explore whether error accumulation degrades long-horizon performance through passive context pollution rather than through capability limits.

A model executing a long-horizon task makes errors. Those errors remain in the context. The model then predicts the next token conditioned on a history that contains its own mistakes. Error probability increases. More errors accumulate. Performance degrades faster than a constant per-step error rate would predict.

This self-conditioning effect is empirically verified by controlling the error rate in the history shown to the model. As the error rate in prior context increases, subsequent step accuracy drops sharply. The mechanism is straightforward: models are trained to predict the most likely next token given context; when the context contains errors, those errors become part of the distribution being continued.
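
To make the compounding concrete, here is a toy Monte-Carlo sketch (not the paper's actual probe): the per-step error probability either stays flat or rises with the fraction of already-wrong steps in the rollout's history. The `base_err` and `sensitivity` values are illustrative assumptions, not measured numbers.

```python
import random

def simulate(n_steps, base_err, sensitivity, trials=2000, seed=0):
    # Average per-step accuracy over many rollouts. At each step the chance
    # of an error is base_err plus sensitivity times the fraction of earlier
    # steps in this rollout that were already wrong (the self-conditioning term).
    rng = random.Random(seed)
    step_acc = [0.0] * n_steps
    for _ in range(trials):
        errors = 0
        for t in range(n_steps):
            frac_bad = errors / t if t else 0.0
            p_err = min(1.0, base_err + sensitivity * frac_bad)
            if rng.random() < p_err:
                errors += 1
            else:
                step_acc[t] += 1
    return [hits / trials for hits in step_acc]

constant = simulate(n_steps=50, base_err=0.05, sensitivity=0.0)   # flat error rate
self_cond = simulate(n_steps=50, base_err=0.05, sensitivity=0.6)  # errors feed back
for t in (0, 9, 24, 49):
    print(f"step {t + 1:2d}   constant: {constant[t]:.2f}   self-conditioned: {self_cond[t]:.2f}")
```

With the feedback term switched off, accuracy stays near 95% at every step; with it switched on, later steps degrade even though the underlying "capability" in the simulation never changes.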

Unlike humans, who typically improve at a task with repetition, LLMs become less reliable as their context fills with their own mistakes. Repetition does not bring improvement; it brings contamination.

Three practical implications:

  1. Model scaling does not fix this — larger models self-condition just as much as smaller ones. The problem is not capability but the conditional prediction objective itself.

  2. Long-horizon failure attribution matters — what looks like a reasoning or planning failure in long tasks is often an execution failure caused by error accumulation. The model had the capability; its own prior outputs degraded it.

  3. Thinking models fix self-conditioning — reasoning models like R1 are not derailed by their prior mistakes in the same way, and sequential test-time compute greatly extends the task length a model can complete (DeepSeek-V3 fails at 2 steps; R1 executes 200). The thinking process appears to insulate the model from its error-contaminated context. (A back-of-envelope sketch of how per-step reliability translates into horizon length follows this list.)
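
To see why per-step reliability compounds into horizon length, here is a back-of-envelope sketch under the simplest independence assumption (task-completion probability p^n over n steps, which self-conditioning violates by dragging p down as errors accumulate). The accuracy values below are illustrative, not figures from the source.

```python
import math

# Longest task completed with >= 50% success, assuming independent steps
# with per-step accuracy p: solve p**n = 0.5 for n.
for p in (0.70, 0.90, 0.99, 0.999):
    horizon = math.log(0.5) / math.log(p)
    print(f"per-step accuracy {p:.3f} -> roughly {horizon:.0f} steps at 50% task success")
```

Small per-step gains buy disproportionately long horizons, which is why keeping the step-level error rate from drifting upward, as self-conditioning does, matters so much for long tasks.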

This is distinct from the question raised in "Does self-revision actually improve reasoning in language models?". Self-revision is a model's deliberate re-examination of its own reasoning, which can itself introduce errors. Self-conditioning is a passive contamination mechanism: no deliberate revision is required, just the accumulation of prior errors in context.


Source: Reasoning Critiques
