Does prompt optimization without inference strategy fail?
Standard practice optimizes prompts and inference strategies separately. But do prompts optimized for single-shot evaluation actually perform worse when deployed at scale with aggregation methods like majority voting?
The standard practice treats prompt optimization and inference scaling as independent: optimize the prompt first (via reward-based search, instruction tuning, etc.), then separately decide the inference strategy (best-of-N sampling, majority voting, etc.). IAPO demonstrates that this decoupling is a methodological error with measurable cost.
The mechanism: different prompts generate responses with different distributional properties. Some prompts produce outputs that are individually strong but don't benefit from aggregation — their variance is low, so generating N samples and voting adds compute without improving quality. Other prompts produce outputs with higher variance but better centering — individually weaker, but under majority voting or best-of-N with a reward model, the aggregation exploits the variance to select high-quality responses. A prompt optimized at N=1 will favor the first type. But if the deployment uses N=8 with majority voting, the second type is strictly better.
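A toy Monte Carlo makes the crossover concrete. This is illustrative, not from the paper: two hypothetical prompts with Gaussian reward distributions, scored under best-of-N selection with a perfect reward signal.

```python
# Toy simulation, not from the paper: two hypothetical prompts whose
# response rewards are Gaussian. Prompt A is individually stronger
# (higher mean, low variance); prompt B is weaker on average but
# higher-variance. Selection is best-of-N with a perfect reward signal.
import numpy as np

rng = np.random.default_rng(0)
TRIALS = 20_000

def best_of_n(mean: float, std: float, n: int) -> float:
    """Monte Carlo estimate of the expected best-of-n reward."""
    samples = rng.normal(mean, std, size=(TRIALS, n))
    return samples.max(axis=1).mean()

for n in (1, 4, 8):
    a = best_of_n(mean=0.70, std=0.05, n=n)  # prompt A: strong, narrow
    b = best_of_n(mean=0.60, std=0.20, n=n)  # prompt B: weak, wide
    print(f"N={n}: prompt A = {a:.3f}, prompt B = {b:.3f}")
```

With these illustrative numbers the ranking flips between N=1 and N=4: prompt A's narrow distribution gives best-of-N nothing to exploit, while prompt B keeps producing selectable outliers.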
This creates "deceiving prompts" — prompts that appear optimal in single-shot evaluation but become suboptimal (or harmful) under inference scaling. The PSST algorithm addresses this by treating prompt selection and inference scale as a joint contextual best-arm identification problem, exploring prompt-inference configurations together rather than sequentially.
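PSST's internals aren't reproduced here, but the joint framing is easy to sketch: treat each (prompt, inference scale) pair as a single arm and spend a fixed evaluation budget identifying the best configuration. The successive-halving loop, the `evaluate` stub, and all scores below are illustrative stand-ins, not the paper's algorithm.

```python
# Minimal sketch of the joint-arm framing (not the actual PSST
# implementation): each arm is a (prompt, inference_scale) pair, and
# successive halving eliminates configurations under a growing budget.
import itertools
import random

random.seed(0)

def evaluate(prompt: str, n: int) -> float:
    """Hypothetical stand-in: run `prompt` with n samples plus
    aggregation and return a noisy quality score."""
    # Toy (mean, per-sample lift) pairs: prompt_b is weaker at N=1 but
    # benefits most from scaling -- a "deceiving prompt".
    base = {"prompt_a": (0.70, 0.005), "prompt_b": (0.60, 0.030),
            "prompt_c": (0.55, 0.010), "prompt_d": (0.50, 0.020)}
    mean, lift = base[prompt]
    return mean + lift * n + random.gauss(0, 0.05)

prompts = ["prompt_a", "prompt_b", "prompt_c", "prompt_d"]
scales = [1, 4, 8]
arms = list(itertools.product(prompts, scales))  # joint (prompt, N) arms

# Successive halving: each round spends more evaluations per surviving
# arm, then keeps the better half of the configurations.
budget = 2
while len(arms) > 1:
    means = {arm: sum(evaluate(*arm) for _ in range(budget)) / budget
             for arm in arms}
    arms = sorted(arms, key=means.get, reverse=True)[: max(1, len(arms) // 2)]
    budget *= 2

best_prompt, best_scale = arms[0]
print(f"selected configuration: {best_prompt} at N={best_scale}")
```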
The empirical results across six tasks: IAPO outperforms disjoint optimization by up to 25% and prompt-only optimization by up to 50%. The gains are consistent across mathematical reasoning, commonsense reasoning, and multi-objective text generation.
The practical implication for inference system design: any pipeline that separately optimizes prompts and inference strategies is leaving significant performance on the table. Where the related question "Can we allocate inference compute based on prompt difficulty?" asks how much inference compute each prompt should receive, the IAPO finding adds a second dimension: which prompt to use given the inference strategy. The two must be co-optimized, as the contrast sketched below shows.
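A minimal contrast of the two pipelines, with hypothetical prompt names and scores chosen to mirror the toy numbers above:

```python
# Hypothetical contrast between the two pipelines. `score(prompt, n)`
# stands in for evaluating a prompt under an N-sample inference strategy.
def score(prompt: str, n: int) -> float:
    # Toy scores: "safe" is best at N=1, "diverse" is best once N grows.
    table = {("safe", 1): 0.70, ("safe", 8): 0.77,
             ("diverse", 1): 0.60, ("diverse", 8): 0.88}
    return table[(prompt, n)]

PROMPTS, SCALES = ["safe", "diverse"], [1, 8]

# Disjoint: pick the prompt at N=1, then pick a scale for that prompt.
p = max(PROMPTS, key=lambda q: score(q, 1))              # -> "safe"
disjoint = (p, max(SCALES, key=lambda n: score(p, n)))   # -> ("safe", 8), 0.77

# Joint: search (prompt, scale) configurations together.
joint = max(((q, n) for q in PROMPTS for n in SCALES),
            key=lambda arm: score(*arm))                 # -> ("diverse", 8), 0.88

print(disjoint, joint)
```

The disjoint pipeline locks in the "safe" prompt because it wins at N=1, then tunes the scale around that choice; the joint search reaches a configuration the disjoint one can never see.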
Source: Inference time scaling
Related concepts in this collection
- Can we allocate inference compute based on prompt difficulty?
  Does adjusting how much compute each prompt receives, rather than using a fixed budget, improve model performance? Could smarter allocation let smaller models compete with larger ones?
  Extends: budget allocation is necessary but not sufficient; the prompt itself must be co-optimized with the inference strategy.
- Why does parallel reasoning outperform single chain thinking?
  Does dividing a fixed token budget across multiple independent reasoning paths beat spending it all on one long chain? This explores how breadth and diversity in reasoning compare to depth.
  Adds: which prompts benefit from parallel scaling depends on the prompt-inference interaction, not just task structure.
- Can prompt optimization teach models knowledge they lack?
  Explores whether sophisticated prompting techniques can inject new domain knowledge into language models, or whether they are limited to activating knowledge already present from training.
  Both constrain prompt optimization: IAPO from the inference-coupling side, the knowledge-injection limit from the knowledge side.
- Can semantic knowledge shift model behavior like reinforcement learning does?
  Can textual descriptions of successful reasoning patterns, prepended as context, achieve the same distribution shifts that RL achieves through parameter updates? This matters because it could eliminate the need for expensive fine-tuning on limited data.
  Training-Free GRPO is a concrete case where IAPO's co-optimization warning applies: experiential knowledge prepended as a token prior is effectively automated prompt optimization guided by GRPO logic, so its effectiveness should depend on whether the distilled knowledge is optimized jointly with the downstream inference strategy or in isolation.
Original note title: prompt optimization decoupled from inference scaling produces systematic misalignment — joint optimization outperforms disjoint by up to 50 percent