Design & LLM Interaction · Language Understanding and Pragmatics · Psychology and Social Cognition

Do popular prompting techniques actually improve model performance?

Five widely-cited prompting methods (zero-shot chain-of-thought, expert prompting, emotion prompting, sandbagging, and re-reading) are tested across multiple models and benchmarks to see whether their reported improvements hold up under rigorous statistical analysis.

Note · 2026-03-28 · sourced from Evaluations

A systematic replication attempt (2024) tests five prominent prompting techniques — zero-shot chain-of-thought, ExpertPrompting, EmotionPrompting, Sandbagging, and Re-Reading — across six models (GPT-3.5, GPT-4o, Gemini 1.5 Pro, Claude 3 Opus, Llama 3-8B, Llama 3-70B) on manually double-checked subsets of reasoning benchmarks (CommonsenseQA, CRT, NumGLUE, ScienceQA, StrategyQA).

The result: "a general lack of statistically significant differences across nearly all techniques tested."

The authors draw an explicit parallel to psychology's replication crisis, arguing that "machine behavior" research — treating LLMs as black boxes and studying input-output correlations — suffers from the same methodological weaknesses: small sample sizes, poorly designed experiments, publication bias, lack of transparency, low statistical power, selective reporting, and preferences for novelty.
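Low statistical power is easy to underestimate. A standard two-proportion power calculation (not from the paper; a generic sketch using illustrative accuracy values) shows why small benchmark subsets cannot reliably detect the modest accuracy gains most prompting papers report:

```python
from statistics import NormalDist

def required_n(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-condition sample size for a two-proportion z-test
    to detect an accuracy change from p1 to p2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = z.inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p2 - p1) ** 2) + 1

# Detecting a 2-point gain (80% -> 82% accuracy) requires thousands of
# questions per condition -- far more than many curated benchmark subsets.
print(required_n(0.80, 0.82))
```

With these illustrative numbers the answer is on the order of 6,000 questions per condition, which is why accuracy bumps measured on a few hundred examples are so often noise.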

This compounds the concern raised by "Does iterative prompt engineering undermine scientific validity?": not only is individual prompt engineering ad-hoc, but even published prompting techniques may not produce the claimed effects under rigorous testing. The field may be building on unreplicated findings.

The parallel to human psychology is precise. Psychology discovered its replication crisis when large-scale replication projects found that many canonical effects were smaller, absent, or non-robust. Machine behavior research has not yet had its "replication project" moment — but this study suggests the problem already exists. As "Does model confidence predict robustness to prompt changes?" suggests, the non-replication may reflect that prompting effects are real only in low-confidence regions where the model is genuinely uncertain, and vanish in high-confidence regions where the model already "knows the answer."

The practical implication: prompting technique papers that report accuracy improvements without statistical significance testing, multiple model evaluation, and controlled baselines should be treated as preliminary findings, not established methods.
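One concrete way to meet that bar is a paired per-question comparison rather than raw accuracy deltas. A minimal sketch using McNemar's exact test (a standard choice for paired binary outcomes; the counts below are hypothetical):

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Exact two-sided McNemar test on discordant pairs:
    b = questions only the baseline prompt got right,
    c = questions only the prompting technique got right."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided binomial tail probability under the null p = 0.5
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical outcomes on a 200-question benchmark: the technique
# flipped 12 questions right and 7 wrong relative to the baseline.
p = mcnemar_exact_p(b=7, c=12)
print(round(p, 3))  # -> 0.359, not significant at alpha = 0.05
```

A 12-vs-7 split looks like an improvement in headline accuracy, but the paired test shows it is entirely consistent with chance, which is exactly the kind of check the paper argues prompting studies routinely omit.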



Original note title: prompting technique improvements do not replicate under controlled statistical testing — machine behavior research faces a looming replication crisis