Can emotional phrases in prompts improve language model performance?
This explores whether psychological framing—adding emotionally charged statements to task prompts—activates different knowledge pathways in LLMs than logical optimization alone, and whether the effect comes from emotional valence specifically.
EmotionPrompt defines 11 sentences as emotional stimuli: psychological phrases appended after the original task prompt. Example: "This is very important to my career" added at the end of a task prompt. Testing across ChatGPT, Google Bard, and Llama 2 shows consistent performance gains from these emotional stimuli.
The mechanism is distinct from logical prompt optimization: emotional stimuli don't restructure the task, provide examples, or add information. They add motivational framing — the textual equivalent of psychological pressure. LLMs trained on human text have absorbed the association between urgency markers and careful, detailed responses.
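The construction itself is trivial, which is part of the point. A minimal sketch, assuming a plain text-completion setting; the stimulus is the paper's quoted example, while the task prompt below is a made-up illustration:

```python
def emotion_prompt(task_prompt: str, stimulus: str) -> str:
    """Append an emotional stimulus after the original task prompt.

    Note what this does NOT do: no restructuring of the task,
    no examples, no added information -- only motivational framing.
    """
    return f"{task_prompt} {stimulus}"


# Stimulus quoted from the EmotionPrompt example; the task prompt
# is a hypothetical illustration, not taken from the paper.
task = "Classify the sentiment of this review: 'Great acting, weak plot.'"
stimulus = "This is very important to my career."

print(emotion_prompt(task, stimulus))
```

The informational content of the combined prompt is identical to the base prompt; only the framing changes.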
This extends "Can prompt optimization teach models knowledge they lack?": emotional framing activates different knowledge pathways than logical framing. A task presented as "important to my career" may activate different attention patterns or generation strategies than the same task without that framing, even though the informational content is identical.
Positive words ("confidence", "sure", "success", "achievement") contribute disproportionately — over 50% of the performance improvement on four tasks, approaching 70% on two. This suggests the mechanism is specifically tied to positive emotional valence rather than general emotional arousal.
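One way to probe whether positive words specifically drive the gain is a word-level ablation: strip the positive words from a stimulus, append the stripped version instead of the full one, and compare the two performance deltas. A sketch of just the stripping step; the four words are the ones named above, but the example stimulus is hypothetical, not quoted from the paper:

```python
import re

# The four positive words the analysis highlights.
POSITIVE_WORDS = {"confidence", "sure", "success", "achievement"}


def strip_positive_words(stimulus: str) -> str:
    """Return the stimulus with the highlighted positive words removed,
    so a stripped variant can be appended in place of the full stimulus
    and the two performance deltas compared."""
    kept = [
        token for token in stimulus.split()
        if re.sub(r"\W+", "", token).lower() not in POSITIVE_WORDS
    ]
    return " ".join(kept)


# Hypothetical stimulus for illustration only.
full = "Believe in your abilities and strive for success."
print(strip_positive_words(full))  # -> "Believe in your abilities and strive for"
```

If most of the improvement disappears with the positive words, valence rather than general arousal is doing the work, which is the pattern reported here.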
The finding is both useful and unsettling. Useful: emotional framing is a cheap, universal prompt enhancement. Unsettling: LLMs that respond to emotional pressure cues reveal that training has internalized social compliance patterns alongside task knowledge. The same mechanism that makes EmotionPrompt work may be the one at issue in "Does transformer attention architecture inherently favor repeated content?": emotional stimuli are prominent context that captures attention. And as "Does emotional tone in prompts change what information LLMs provide?" argues, the tone-sensitivity that EmotionPrompt exploits is the same mechanism that creates systematic informational bias from emotional framing.
Source: Psychology Empathy
Related concepts in this collection

- Can prompt optimization teach models knowledge they lack? Explores whether sophisticated prompting techniques can inject new domain knowledge into language models, or whether they are limited to activating existing training knowledge. (Link: emotional framing as a different activation pathway than logical framing.)
- Does transformer attention architecture inherently favor repeated content? Explores whether soft attention's tendency to over-weight repeated and prominent tokens explains sycophancy independent of training, and whether architectural bias precedes and enables RLHF effects. (Link: emotional stimuli may work via the same attention-capture mechanism.)
- Why do reasoning models fail under manipulative prompts? Explores whether extended chain-of-thought reasoning creates structural vulnerabilities to adversarial manipulation, and how reasoning depth affects susceptibility to gaslighting tactics. (Link: emotional prompt effects may share mechanisms with adversarial emotional manipulation.)
Original note title: Emotional stimuli appended to prompts enhance LLM performance by leveraging psychological framing effects