
How much does the user shape what a model generates?

Prompt engineering is often framed as unlocking hidden capabilities, but what if users are actually imposing their own expectations onto model output? This note explores whether prompt refinement is discovery or confirmation.

Note · 2026-04-19 · sourced from Context Engineering
Related notes: How well do language models understand their own knowledge? · How do you build domain expertise into general AI models?

The Foundation Priors paper models prompt engineering not as instruction-giving but as an iterative alignment process. The user proposes a query to the foundation model, evaluates the resulting synthetic data against their anticipated distribution (using a divergence measure), and refines the prompt until the synthetic data aligns sufficiently with those priors. The end product captures both the foundation model's learned patterns and the user's subjective filters.
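The loop described above can be sketched in a few lines. This is a toy model, not the paper's method: the "prompt" is reduced to a reweighting of the model's base distribution over answer categories, the divergence measure is KL, and the multiplicative update rule is an illustrative assumption.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (assumed same support)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

# Hypothetical setup: the model's learned distribution over four answer
# categories, and the user's anticipated ("prior") distribution.
model_base = [0.40, 0.30, 0.20, 0.10]
user_prior = [0.10, 0.20, 0.30, 0.40]

# A "prompt" is modeled as a per-category reweighting of the base distribution.
prompt_weights = [1.0, 1.0, 1.0, 1.0]

threshold = 1e-3
for step in range(1000):
    # Generate "synthetic data": the model's output under the current prompt.
    synthetic = normalize([w * b for w, b in zip(prompt_weights, model_base)])
    # Evaluate against the user's anticipated distribution.
    d = kl(user_prior, synthetic)
    if d < threshold:
        break
    # Refine the prompt: nudge each weight toward the user's prior
    # (an assumed multiplicative update, chosen for the sketch).
    prompt_weights = [w * (p / s) ** 0.5
                      for w, p, s in zip(prompt_weights, user_prior, synthetic)]

print(f"converged in {step} steps, KL = {d:.5f}")
```

The loop terminates when the synthetic distribution is "close enough" to the user's priors, which is exactly the alignment criterion the paper formalizes: the stopping condition lives in the user's head, not in any external data.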

This reframes what prompt engineering actually does. The standard view treats it as unlocking model capability — finding the right key for the right lock. The Foundation Priors view treats it as imposing user subjectivity onto model output — the user is not discovering what the model knows but shaping what the model produces to match what the user already expects. The "skill" of prompt engineering is partly the skill of iteratively refining until output confirms prior expectations.

The epistemic danger is clear: without external anchoring in real data, this process produces epistemic circularity. The user refines prompts until the output looks right, where "looks right" means "matches what I already believe." The model becomes a mirror that reflects the user's anticipated distribution back to them with the authority of a computational system. As asked in "How should users control systems with unpredictable outputs?", the unpredictability of generation creates an illusion of independent inquiry: the output varies enough to feel like discovery rather than confirmation, but the prompt-refinement process systematically steers toward the user's priors.
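The circularity can be made concrete with the same toy loop. Here two very different hypothetical models are refined against the same user prior; both end up reproducing the prior almost exactly, so the final output says little about what either model "knows." All names and numbers are illustrative assumptions.

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def refine(model_base, user_prior, steps=200):
    """Run the toy prompt-refinement loop; return the final synthetic distribution."""
    weights = [1.0] * len(model_base)
    for _ in range(steps):
        synthetic = normalize([w * b for w, b in zip(weights, model_base)])
        weights = [w * (p / s) ** 0.5
                   for w, p, s in zip(weights, user_prior, synthetic)]
    return normalize([w * b for w, b in zip(weights, model_base)])

user_prior = [0.10, 0.20, 0.30, 0.40]
# Two models with very different learned distributions over the same categories.
model_a = [0.70, 0.10, 0.10, 0.10]
model_b = [0.05, 0.05, 0.45, 0.45]

out_a = refine(model_a, user_prior)
out_b = refine(model_b, user_prior)
# Both outputs collapse onto the user's prior: the models' own
# distributions are washed out by the refinement loop.
print(kl(user_prior, out_a), kl(user_prior, out_b))
```

This is the mirror effect in miniature: when the stopping criterion is "matches my prior," the model's contribution is progressively erased, and any two models become interchangeable confirmation machines.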


Source: Context Engineering · Paper: Foundation Priors

Prompt engineering is an iterative alignment process in which users inject their own anticipated distributions into generation: the user's priors shape the output as much as the model's training does.