LLM Reasoning and Architecture · Language Understanding and Pragmatics

Why do language models need so much more text than humans?

Language models train on the surface of written text, but humans learn by inferring the underlying thoughts behind what they read. Does this explain why models need vastly more data to reach human-level understanding?

Note · 2026-05-03 · sourced from Data

Human-written text is the culmination of an underlying thought process — when we write, there is often an internal dialogue that clarifies or determines the written word. The published text is a compressed artifact of this process. Modern language models are pretrained directly on this compressed result and require a large portion of the entire human-written web to learn what humans learn from a much smaller volume. This is data inefficiency, and as compute growth outpaces web growth, we may soon face a data-constrained regime where this inefficiency becomes binding.

The proposed cause: humans do not learn from the compressed surface alone. When a human reads a research paper, they analyze specific claims, integrate them with prior knowledge, and attempt to "decompress" the author's original thought process. Reasoning serves learning — the reader infers the internal dialogue undergirding the observed text and learns from that decompressed version. LMs trained directly on the surface cannot benefit from this decompression because no decompressed signal exists in the data — the same gap that "Can reconstructing expert thinking improve reasoning transfer?" addresses through reconstructed expert thoughts.

The proposed remedy frames language modeling as a latent-variable problem: observed data X depends on underlying latent thoughts Z, and the model learns from the joint distribution p(Z, X) rather than from p(X) alone. Inferring the latent thoughts then becomes a synthetic data generation problem handled by a generator q(Z|X) — and crucially, the LM itself can serve as that generator, because its reasoning and theory-of-mind capabilities provide a strong prior over plausible latent thoughts. Weights can therefore be shared between the LM and the latent thought generator, reducing training to a small modification of standard pretraining.
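
For orientation, the standard way to write such a latent-variable objective is the marginal likelihood with a variational lower bound; this is the shape of the problem, not necessarily the exact objective optimized here:

```latex
% Latent-variable view of language modeling: marginalize over thoughts Z,
% and bound the marginal likelihood using the thought generator q(Z|X).
\log p_\theta(X)
  = \log \sum_{Z} p_\theta(Z)\, p_\theta(X \mid Z)
  \;\geq\; \mathbb{E}_{q(Z \mid X)}\!\left[ \log p_\theta(Z, X) - \log q(Z \mid X) \right]
```

Maximizing the bound with respect to the model parameters while improving q(Z|X) is exactly the alternation an EM-style procedure performs.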

The Bootstrapping Latent Thoughts (BoLT) procedure runs an EM-style iteration in which the E-step is a Monte-Carlo approximation of the posterior over latent thoughts, one that approaches the true posterior as the number of samples grows. Empirically, BoLT improves data efficiency over at least three iterations and benefits from at least four samples per E-step. Inference compute therefore becomes a knob for scaling pretraining data efficiency — a redirection of compute from training time to inference-time bootstrapping.
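
A minimal sketch of one such iteration, assuming hypothetical helpers sample_latent_thought, log_prob, and train_on (the actual procedure's prompting, importance weighting, and training setup will differ):

```python
def bolt_iteration(model, corpus, num_samples=4):
    """One EM-style BoLT iteration (sketch): infer latent thoughts for each
    document (E-step), then retrain on the thought-augmented corpus (M-step)."""
    augmented = []
    for x in corpus:
        # E-step: the LM itself acts as q(Z|X), proposing latent thoughts
        # that could plausibly have produced the observed text x.
        thoughts = [model.sample_latent_thought(x) for _ in range(num_samples)]
        # Monte-Carlo weighting: score each thought by how well it explains x;
        # with more samples this approximates the true posterior more closely.
        scores = [model.log_prob(x, prefix=z) for z in thoughts]
        best = max(zip(scores, thoughts), key=lambda pair: pair[0])[1]
        augmented.append(best + "\n" + x)
    # M-step: ordinary next-token training on thought-prefixed documents,
    # so the model learns p(Z, X) rather than p(X) alone.
    model.train_on(augmented)
    return model
```

Keeping only the best-scoring thought is a simplification; the point is that each iteration spends inference compute (sampling and scoring thoughts) to build a better training corpus for the next round of pretraining.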


Source: Data

Related concepts in this collection

Original note: the data efficiency gap between humans and language models stems from learning from compressed text without decompressing the underlying thoughts