Why do language models need so much more text than humans?
Language models train on the surface of written text, but humans learn by inferring the underlying thoughts behind what they read. Does this explain why models need vastly more data to reach human-level understanding?
Human-written text is the culmination of an underlying thought process — when we write, there is often an internal dialogue that clarifies or determines the written word. The published text is a compressed artifact of this process. Modern language models are pretrained directly on this compressed result and require a large portion of the entire human-written web to learn what humans learn from a much smaller volume. This is data inefficiency, and as compute growth outpaces web growth, we may soon face a data-constrained regime where this inefficiency becomes binding.
The proposed cause: humans do not learn from the compressed surface alone. When a human reads a research paper, they analyze specific claims, integrate them with prior knowledge, and attempt to "decompress" the author's original thought process. Reasoning serves learning: the reader infers the internal dialogue undergirding the observed text and learns from that decompressed version. LMs trained directly on the surface cannot benefit from this decompression because no decompressed signal exists in the data. This is the same gap that "Can reconstructing expert thinking improve reasoning transfer?" addresses through reconstructed expert thoughts.
The proposed remedy frames language modeling as a latent-variable problem: observed data X depends on underlying latent thoughts Z, and the model learns from the joint distribution p(Z, X) rather than from p(X) alone. Producing latent thoughts then becomes a synthetic data generation problem, handled by a generator q(Z|X). Crucially, the LM itself can serve as this generator, because its reasoning and theory-of-mind capabilities give it a strong prior over plausible latent thoughts. Weights can therefore be shared between the LM and the latent thought generator, reducing training to a small modification of standard pretraining.
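To make the latent-variable framing concrete, here is the standard bound this setup suggests (notation mine; the paper's exact objective may differ). Sampling thoughts from q(Z|X) and scoring the thought-text pair jointly gives a lower bound on the marginal log-likelihood via Jensen's inequality:

$$
\log p_\theta(X) \;=\; \log \mathbb{E}_{Z \sim q(Z \mid X)}\!\left[\frac{p_\theta(Z, X)}{q(Z \mid X)}\right] \;\ge\; \mathbb{E}_{Z \sim q(Z \mid X)}\!\left[\log p_\theta(Z, X) - \log q(Z \mid X)\right].
$$

With weight sharing, the same parameters $\theta$ define both the joint model $p_\theta(Z, X)$ and, when conditioned on observed text, the proposal $q(Z \mid X)$.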
The Bootstrapping Latent Thoughts (BoLT) procedure uses an EM-style iteration where the E-step is a Monte-Carlo estimator that approaches the true posterior as the number of samples grows. Empirically, BoLT improves data efficiency over at least three iterations and benefits from at least four samples per E-step. Inference compute therefore becomes a knob for scaling pretraining data efficiency — a redirection of compute from training-time to inference-time bootstrapping.
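As a concrete illustration of the loop this describes, below is a minimal, hypothetical sketch of one iteration, assuming an importance-weighted Monte-Carlo E-step and a generic model interface. `LatentThoughtLM` and its methods are placeholder names, not the paper's code or API.

```python
# Hypothetical sketch of one BoLT-style iteration; class and method names
# (LatentThoughtLM, propose_thought, joint_logprob, ...) are placeholders,
# not the authors' implementation or API.
import math
from dataclasses import dataclass


@dataclass
class LatentThoughtLM:
    """Stand-in for an LM that both proposes latent thoughts (q(Z|X))
    and scores thought/text pairs (p(Z, X)) with the same weights."""

    def propose_thought(self, text: str) -> str:
        # Real system: sample a latent-thought annotation conditioned on X.
        return f"[inferred thought for: {text[:24]}...]"

    def joint_logprob(self, thought: str, text: str) -> float:
        # Real system: log p(Z, X) under current weights. Placeholder value.
        return -0.01 * (len(thought) + len(text))

    def proposal_logprob(self, thought: str, text: str) -> float:
        # Real system: log q(Z | X) under current weights. Placeholder value.
        return -0.01 * len(thought)

    def train_step(self, weighted_examples) -> None:
        # Real system: gradient step on sum_k w_k * log p(Z_k, X).
        pass


def bolt_iteration(model: LatentThoughtLM, corpus, num_samples: int = 4) -> None:
    """One EM-style pass: Monte-Carlo E-step, then an M-step per document."""
    for text in corpus:
        # E-step: draw K candidate latent thoughts and importance-weight them
        # by p(Z, X) / q(Z | X); more samples -> closer to the true posterior.
        thoughts = [model.propose_thought(text) for _ in range(num_samples)]
        log_w = [
            model.joint_logprob(z, text) - model.proposal_logprob(z, text)
            for z in thoughts
        ]
        max_lw = max(log_w)
        unnorm = [math.exp(lw - max_lw) for lw in log_w]
        weights = [u / sum(unnorm) for u in unnorm]

        # M-step: update the shared weights on the weighted (Z, X) pairs.
        model.train_step(list(zip(weights, thoughts, [text] * num_samples)))


if __name__ == "__main__":
    bolt_iteration(LatentThoughtLM(), ["Observed passage from the pretraining corpus."])
```

The point of the sketch is the structure, not the stub scores: the same model object plays both roles, and the number of E-step samples is the inference-compute knob described above.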
Source: Data
Related concepts in this collection
- Can reconstructing expert thinking improve reasoning transfer?
  Expert texts show only the final result of complex thinking. Can we reverse-engineer those hidden thought processes and use them to train models that reason better across different domains?
  extends: companion piece — same surface-vs-process diagnosis at the data layer; Reasoning CPT and BoLT are convergent solutions
- Can text-trained models compress images better than specialized tools?
  Do general-purpose language models trained only on text outperform domain-specific compressors like PNG and FLAC on their native data? This tests whether compression ability is universal or requires domain specialization.
  tension: LM-as-compressor framing implies surface-only training is sufficient if compression is the goal; this note argues decompression of hidden thought is what humans learn from
- Can chain-of-thought reasoning emerge during pretraining itself?
  Does treating reasoning as an exploratory action within the pretraining phase, rather than post-training, allow models to develop stronger reasoning capabilities earlier? This matters because it could reshape when and how we train reasoning into language models.
  complements: RLP also adds reasoning signal at pretraining via information-gain reward — different mechanism, same target
- Can pretraining corpora themselves provide verifiable RL rewards?
  Does framing next-token prediction as a reasoning task with ground-truth verification eliminate the need for human feedback or domain-specific rewards during language model pretraining?
  complements: RPT and BoLT both convert pretraining into a reasoning-aware procedure — RL signal vs latent-variable EM
- Can training data itself teach harder reasoning steps?
  Can augmenting pretraining data with generated reasoning trajectories help models learn complex multi-step reasoning more efficiently? This explores whether intermediate explanations in training data unlock capabilities standard next-token prediction misses.
  exemplifies: same data-efficiency gain mechanism — TPT applies TTS at training; BoLT applies EM-style decompression
- Does AI text generation unfold through temporal reflection?
  Explores whether the sequential ordering of tokens in LLM generation constitutes genuine temporal thought or merely probabilistic computation without reflective duration.
  extends: Adrian's atemporal-AI critique — written text compresses temporal thinking; LMs trained on surface lose the temporal dimension that produced it
- Does AI actually commodify expertise or tokenize it?
  The standard framing treats AI output like mass-produced commodities, but does AI's contextual, mutable nature fit better with token economics than commodity theory?
  connects: tokenized intelligence rests on what the tokens compress; BoLT names the lost compression and tries to re-add it
- Do base models already contain hidden reasoning ability?
  Explores whether reasoning capability emerges during pre-training as a latent feature rather than being created by post-training methods like reinforcement learning or fine-tuning.
  complements: base capability is already there; BoLT explains why the latent capability and pretraining objective do not naturally align — surface-only training is the misalignment
Original note title: The data efficiency gap between humans and language models stems from learning from compressed text without decompressing the underlying thoughts.