Do foundation models actually reduce our need for real data?
As AI systems grow more powerful, does empirical observation become less necessary? This note explores whether foundation models can substitute for ground truth or whether they instead demand stronger empirical anchoring.
The intuitive assumption is that more powerful AI reduces the need for empirical data — the model "knows" enough to substitute for observation. The Foundation Priors paper argues the opposite: foundation models heighten the need for empirical data because they introduce a new source of structured subjectivity that must be disciplined.
Real data serves as the anchor that prevents the foundation prior from becoming self-confirming. The iterative prompt engineering process — propose query, evaluate output, refine prompt, repeat — converges toward the user's anticipated distribution. Without empirical anchoring, this convergence is epistemic circularity: the user refines until the output matches their beliefs, then treats the match as evidence that their beliefs are correct.
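To see the circularity concretely, here is a minimal runnable sketch (all names hypothetical, not from the paper): the loop's acceptance test compares the model's output to the user's prior belief rather than to any ground truth, so the accepted answer converges to that belief by construction.

```python
import random

def model(prompt: str) -> float:
    # Stand-in for an LLM: the estimate drifts toward whatever
    # direction the prompt nudges it in, plus a little noise.
    nudge = prompt.count("higher") - prompt.count("lower")
    return 0.5 + 0.05 * nudge + random.gauss(0, 0.02)

def refine_until_match(user_belief: float, max_rounds: int = 100) -> float:
    prompt = "Estimate the effect size."
    output = model(prompt)
    for _ in range(max_rounds):
        # The stopping rule: "looks right" means "matches my belief".
        if abs(output - user_belief) < 0.02:
            break
        prompt += " higher" if output < user_belief else " lower"
        output = model(prompt)
    return output

# The accepted answer tracks the user's belief, not any ground truth.
print(refine_until_match(user_belief=0.8))
```

Whatever value user_belief takes, the loop ratifies it; nothing in the procedure can surprise the user, which is exactly the circularity the anchoring argument targets.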
With anchoring, however, foundation priors can serve as "an efficient and transparent way to inject domain knowledge, structure high-dimensional spaces, or help navigate problems where real data are scarce." The key is the trust parameter λ: when calibrated conservatively and tempered by real observations, synthetic data becomes useful prior information. When λ is implicitly set to 1 (full trust, no anchoring), synthetic data becomes a substitute for evidence.
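As a rough illustration of how λ might operate, here is a sketch assuming a power-prior-style weighting in a Beta-Bernoulli model; the paper's actual formalization may differ, and posterior_mean, lam, and the counts are all invented for the example. Synthetic pseudo-counts enter the posterior discounted by λ, while real observations enter at full weight.

```python
def posterior_mean(syn_succ: int, syn_n: int,
                   real_succ: int, real_n: int, lam: float) -> float:
    # Beta(1, 1) base prior; synthetic counts are discounted by lam,
    # real counts enter undiscounted.
    a = 1 + lam * syn_succ + real_succ
    b = 1 + lam * (syn_n - syn_succ) + (real_n - real_succ)
    return a / (a + b)

# Synthetic data claims a 90% success rate; 20 real observations show 40%.
for lam in (0.0, 0.1, 1.0):
    print(f"lam={lam}: posterior mean = {posterior_mean(90, 100, 8, 20, lam):.3f}")
# lam=1 lets the synthetic data swamp the real evidence; a smaller lam
# tempers its pull while still borrowing structure from the prior.
```

At λ = 0 the foundation prior is ignored entirely; at λ = 1 one hundred synthetic observations outvote twenty real ones. Conservative calibration of λ is what keeps synthetic data in the role of prior information rather than evidence.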
This has direct implications for the Tokenization framework. The exchange value of AI output (its appearance as knowledge) is what makes it tempting to treat as evidence. The use value (whether the output actually performs as claimed) can only be verified through empirical anchoring. The Foundation Priors paper formalizes what the Tokenization thesis describes: the gap between exchange value and use value in AI outputs must be closed through external validation, not through more prompting.
Source: Context Engineering Paper: Foundation Priors
Related concepts in this collection
- Should we treat LLM outputs as real empirical data?
  Can synthetic text generated by language models serve as evidence in the same way observations from the world do? This matters because researchers increasingly rely on AI-generated content without accounting for its fundamentally different epistemic status.
  Relation: the parent framework for this anchoring argument.
- How much does the user shape what a model generates?
  Prompt engineering is often framed as unlocking hidden capabilities, but what if users are actually imposing their own expectations onto model output? This explores whether refinement is discovery or confirmation.
  Relation: describes the process that requires anchoring.
- Does iterative prompt engineering undermine scientific validity?
  When researchers repeatedly adjust prompts to get desired outputs, does this practice introduce hidden bias and produce unreplicable results? The question matters because LLM-based research is proliferating without clear methodological safeguards.
  Relation: epistemic circularity is the formal version of self-fulfilling prophecy.
Original note title: foundation models heighten the need for empirical data rather than eliminating it — without real-data anchoring the iterative prompt process risks epistemic circularity