Do popular prompting techniques actually improve model performance?
Five widely cited prompting methods (chain-of-thought, emotion prompting, sandbagging, and others) are tested across multiple models and benchmarks to test whether their reported improvements hold up under rigorous statistical analysis.
A systematic replication attempt (2024) tests five prominent prompting techniques — zero-shot chain-of-thought, ExpertPrompting, EmotionPrompting, Sandbagging, and Re-Reading — across six models (GPT-3.5, GPT-4o, Gemini 1.5 Pro, Claude 3 Opus, Llama 3-8B, Llama 3-70B) on manually double-checked subsets of reasoning benchmarks (CommonsenseQA, CRT, NumGLUE, ScienceQA, StrategyQA).
The result: "a general lack of statistically significant differences across nearly all techniques tested."
The authors draw an explicit parallel to psychology's replication crisis, arguing that "machine behavior" research — treating LLMs as black boxes and studying input-output correlations — suffers from the same methodological weaknesses: small sample sizes, poorly designed experiments, publication bias, lack of transparency, low statistical power, selective reporting, and preferences for novelty.
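To make the low-power point concrete, here is a minimal back-of-the-envelope sketch using the standard normal-approximation sample-size formula for two independent proportions (Python standard library only). The 70% vs. 75% accuracies, alpha, and power are hypothetical placeholders, not figures from the paper, and the unpaired formula is conservative relative to a paired design on the same questions, though usually of the same order of magnitude.

```python
# Rough power calculation: how many benchmark questions are needed to detect
# a given accuracy difference between two prompting conditions?
# Normal approximation for two independent proportions; stdlib only.
from statistics import NormalDist

def n_per_condition(p1, p2, alpha=0.05, power=0.80):
    """Questions needed per condition to detect a p1 -> p2 accuracy shift."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Detecting a hypothetical 5-point gain (70% -> 75% accuracy) at alpha = 0.05
# with 80% power needs roughly 1,250 questions per condition.
print(round(n_per_condition(0.70, 0.75)))
```

Even under these generous assumptions, a modest 5-point effect needs on the order of a thousand items per condition, which is more than many manually curated benchmark subsets contain.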
The non-replication compounds the concern raised in Does iterative prompt engineering undermine scientific validity?: not only is individual prompt engineering ad hoc, but even published prompting techniques may not produce their claimed effects under rigorous testing. The field may be building on unreplicated findings.
The parallel to psychology is direct. Psychology discovered its replication crisis when large-scale replication projects found that many canonical effects were smaller, absent, or non-robust. Machine behavior research has not yet had its "replication project" moment, but this study suggests the problem already exists. Drawing on Does model confidence predict robustness to prompt changes?, the non-replication may reflect that prompting effects are real only in low-confidence regions, where the model is genuinely uncertain, and vanish in high-confidence regions, where the model already "knows the answer."
The practical implication: prompting technique papers that report accuracy improvements without statistical significance testing, multiple model evaluation, and controlled baselines should be treated as preliminary findings, not established methods.
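As one concrete (hypothetical) version of such a test: when the same questions are answered under a baseline prompt and under a candidate technique, a paired test such as McNemar's exact test is a minimal sanity check. The sketch below is a stdlib-only implementation under that assumption; the correctness vectors are invented toy data, not results from the paper.

```python
# Minimal sketch: exact McNemar test for a paired prompting comparison.
# Assumes per-question 0/1 correctness for a baseline prompt and a candidate
# technique (e.g., zero-shot CoT) on the SAME questions.
from math import comb

def mcnemar_exact(baseline_correct, technique_correct):
    """Exact McNemar test on paired 0/1 correctness vectors.

    Only the discordant pairs (questions where exactly one condition is
    correct) carry information about a difference between prompts.
    """
    b = sum(1 for x, y in zip(baseline_correct, technique_correct) if x and not y)
    c = sum(1 for x, y in zip(baseline_correct, technique_correct) if y and not x)
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of any difference
    k = min(b, c)
    # Two-sided exact binomial p-value under H0: flips are 50/50 in each direction.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

# Hypothetical toy data: 10 questions, the technique helps on 2 and hurts on 1.
baseline  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
technique = [1, 1, 1, 1, 0, 1, 1, 1, 0, 0]
print(mcnemar_exact(baseline, technique))  # large p-value: no significant gain
```

Because only the discordant pairs carry information, a technique has to flip a meaningful number of answers in its favor before a test like this registers anything, which is exactly the bar many reported improvements fail to clear.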
Source: Evaluations
Related concepts in this collection
- Does iterative prompt engineering undermine scientific validity? When researchers repeatedly adjust prompts to get desired outputs, does this practice introduce hidden bias and produce unreplicable results? The question matters because LLM-based research is proliferating without clear methodological safeguards. Connection: individual-level and community-level replication failures compound.
- Does model confidence predict robustness to prompt changes? Explores whether a model's certainty about its answer determines how much it resists prompt rephrasing and semantic variation. This matters because it could explain why some tasks are harder to evaluate reliably. Connection: non-replication may reflect that effects vanish in high-confidence regions.
- Why do chain-of-thought examples fail across different conditions? Chain-of-thought exemplars show surprising sensitivity to order, complexity level, diversity, and annotator style. Understanding these brittleness dimensions could reveal what makes reasoning prompts robust or fragile. Connection: brittleness is consistent with non-replicability; effects that depend on specific exemplar properties are inherently fragile.
Original note title
prompting technique improvements do not replicate under controlled statistical testing — machine behavior research faces a looming replication crisis