How much does the user shape what a model generates?
Prompt engineering is often framed as unlocking hidden capabilities, but what if users are actually imposing their own expectations onto model output? This note explores whether prompt refinement is discovery or confirmation.
The Foundation Priors paper models prompt engineering not as instruction-giving but as an iterative alignment process. The user proposes a query to the foundation model, evaluates the resulting synthetic data against their anticipated distribution (using a divergence measure), and refines the prompt until the synthetic data aligns sufficiently with those priors. The end product captures both the foundation model's learned patterns and the user's subjective filters.
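To make the loop concrete, here is a minimal sketch of that process. The source describes only its shape (propose a query, generate, compare against the anticipated distribution with a divergence measure, revise), so the specifics below are assumptions for illustration: `generate`, `featurize`, and `revise` are hypothetical stand-ins for the model call, the binning of samples into an empirical distribution, and the user's revision step, and KL divergence stands in for the unspecified divergence measure.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over shared bins.
    One possible divergence measure; the source does not specify which."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def refine_prompt(prompt, anticipated, generate, featurize, revise,
                  tolerance=0.05, max_rounds=10):
    """Iterative alignment: generate synthetic data, compare its empirical
    distribution to the user's anticipated distribution, and revise the
    prompt until the divergence falls below tolerance.

    generate(prompt) -> list of samples            (the model's contribution)
    featurize(samples) -> histogram over bins      (how output is evaluated)
    revise(prompt, anticipated, empirical) -> str  (the user's contribution)
    All three are hypothetical callables supplied by the caller.
    """
    for _ in range(max_rounds):
        samples = generate(prompt)
        empirical = featurize(samples)
        divergence = kl_divergence(anticipated, empirical)
        if divergence < tolerance:
            # Stopping rule: output matches the user's prior, not reality.
            return prompt, samples, divergence
        prompt = revise(prompt, anticipated, empirical)
    return prompt, samples, divergence
```

The design point worth noticing is the stopping rule: the loop terminates when the synthetic data matches `anticipated`, the user's prior, with no reference to external data, which is precisely the circularity discussed below.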
This reframes what prompt engineering actually does. The standard view treats it as unlocking model capability — finding the right key for the right lock. The Foundation Priors view treats it as imposing user subjectivity onto model output — the user is not discovering what the model knows but shaping what the model produces to match what the user already expects. The "skill" of prompt engineering is partly the skill of iteratively refining until output confirms prior expectations.
The epistemic danger is clear: without external anchoring in real data, this process produces epistemic circularity. The user refines prompts until the output looks right, where "looks right" means "matches what I already believe." The model becomes a mirror that reflects the user's anticipated distribution back to them with the authority of a computational system. As the related note "How should users control systems with unpredictable outputs?" observes, the unpredictability of generation creates an illusion of independent inquiry: the output varies enough to feel like discovery rather than confirmation, but the prompt refinement process systematically steers toward the user's priors.
Source: Context Engineering Paper: Foundation Priors
Related concepts in this collection
- How should users control systems with unpredictable outputs?
  When generative AI produces different outputs from identical inputs, how do interaction design principles help users maintain control and develop effective mental models for stochastic systems?
  Relation: variability creates the illusion that prompt-refined output is discovery rather than confirmation.
- Can prompt optimization teach models knowledge they lack?
  Explores whether sophisticated prompting techniques can inject new domain knowledge into language models, or if they're limited to activating existing training knowledge.
  Relation: a complementary constraint; the user cannot inject new priors either, only select among the model's existing patterns.
- Should we treat LLM outputs as real empirical data?
  Can synthetic text generated by language models serve as evidence in the same way observations from the world do? This matters because researchers increasingly rely on AI-generated content without accounting for its fundamentally different epistemic status.
  Relation: the parent framework for this note.
Original note title: prompt engineering is an iterative alignment process where users inject their own anticipated distributions into generation; the user's priors shape the output as much as the model's training.