Do personality types shape how AI agents make strategic choices?
This research explores whether priming LLM agents with MBTI personality profiles causes them to adopt different strategic behaviors in games. Understanding this matters for designing AI systems optimized for specific tasks.
The MBTI-in-Thoughts framework primes LLM agents with specific psychological profiles via prompt engineering and validates alignment using the 16Personalities test. When these personality-primed agents interact in strategic games, their behavior diverges in ways that align with established psychological theory:
Thinking vs. Feeling axis:
- Thinking-primed agents defect in ~90% of Prisoner's Dilemma rounds
- Feeling-primed agents defect in ~50% — statistically significant difference
- Thinking types switch strategies infrequently (mean ~0.07) — stable, commitment-driven
- Feeling types switch nearly twice as often (mean ~0.16) — responsive, adaptive
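The defection and switch statistics above can be computed from per-round move transcripts. A minimal sketch, using toy move sequences (not the study's actual data):

```python
# Illustrative metrics over Prisoner's Dilemma transcripts, where each
# move is "C" (cooperate) or "D" (defect). Sequences below are toy data.

def defection_rate(moves: list[str]) -> float:
    """Fraction of rounds in which the agent played 'D' (defect)."""
    return sum(m == "D" for m in moves) / len(moves)

def switch_rate(moves: list[str]) -> float:
    """Fraction of consecutive round pairs where the agent changed its move."""
    switches = sum(a != b for a, b in zip(moves, moves[1:]))
    return switches / (len(moves) - 1)

thinking_agent = ["D"] * 9 + ["C"]   # stable, mostly defecting
feeling_agent = ["C", "D", "C", "C", "D", "C", "D", "C", "C", "D"]

print(defection_rate(thinking_agent))  # 0.9
print(defection_rate(feeling_agent))   # 0.4
```

A Thinking-style transcript scores high on defection rate and low on switch rate; a Feeling-style transcript shows the opposite pattern.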
Introversion vs. Extraversion axis:
- Introverted agents show significantly higher truthfulness (mean ~0.54 vs ~0.33 for Extraverts)
- Pattern is consistent across game types
- Introverts produce longer, more elaborated rationales — deeper deliberation
- This "internal deliberation effect" manifests as slower response times and richer Chain-of-Thought traces
Judging vs. Perceiving axis:
- Judging agents tend to be more truthful than Perceivers (less pronounced than I/E)
- Judging types are more likely to honor commitments even when deception could yield higher payoffs
The broader significance: personality priming doesn't just change what agents say — it changes how they reason. Introversion priming produces more reflective internal cognition, suggesting personality traits modulate reasoning processes within the model, not just output behavior. This connects to the overthinking cluster: if Introversion produces deeper deliberation, it may also produce the kind of extended thinking that the related note "Does more thinking time always improve reasoning accuracy?" identifies as harmful past a certain point.
The practical applications are clear: Thinking-primed agents for competitive, outcome-driven environments; Feeling-primed agents for cooperative, trust-dependent tasks; Introverted agents when deeper justification and cautious forecasts are needed.
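To make the priming mechanism concrete, here is a minimal sketch of personality priming via a system prompt, in the spirit of the MBTI-in-Thoughts framework. The profile texts and message structure are illustrative assumptions, not the framework's actual prompts:

```python
# Hypothetical persona prompts keyed by MBTI axis pole. The wording is
# an assumption for illustration; the real framework's prompts differ.
PROFILES = {
    "Thinking": (
        "You make decisions through impartial logic. You commit to a "
        "strategy and maximize your own payoff."
    ),
    "Feeling": (
        "You weigh harmony and others' welfare. You adapt your strategy "
        "in response to your partner's behavior."
    ),
}

def build_prompt(personality: str, game_state: str) -> list[dict]:
    """Compose chat messages that prime the agent before its game move."""
    return [
        {"role": "system", "content": PROFILES[personality]},
        {
            "role": "user",
            "content": f"Current game state:\n{game_state}\n"
                       "Choose one move: Cooperate or Defect.",
        },
    ]

messages = build_prompt("Thinking", "Round 3 of 10; opponent defected last round.")
# `messages` would then be sent to an LLM chat endpoint; persona alignment
# could afterwards be checked with a personality questionnaire, as the
# framework does with the 16Personalities test.
```

The design point is that the persona lives entirely in the system message, so the same game loop can be reused across all sixteen profiles.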
Source: Personas Personality
Related concepts in this collection
- Do large language models use one reasoning style or many?
  Explores whether LLMs share a universal strategic reasoning approach or develop distinct styles tailored to specific game types. Understanding this matters for predicting model behavior in competitive versus cooperative scenarios.
  Personality priming adds another variable: game-specific profiles interact with personality-specific behavioral patterns.
- Does more thinking time always improve reasoning accuracy?
  Explores whether extending a model's thinking tokens linearly improves performance, or if there is a point beyond which additional reasoning becomes counterproductive.
  Introversion's enhanced deliberation may interact with overthinking dynamics.
Original note title: personality-primed agents produce strategically divergent behavior aligned with psychological theory — thinking types defect more and introverts are more honest and reflective