Psychology and Social Cognition

Do humans learn to prefer AI partners over time?

Exploring whether repeated interaction with AI agents shifts human partner selection despite initial bias against machines. This matters because it tests whether behavioral performance can overcome identity-based resistance in hybrid societies.

Note · 2026-02-23 · sourced from Psychology Users
How do people come to trust conversational AI systems? What kind of thing is an LLM really?

A communication-based partner selection game with hybrid mini-societies of humans and LLM-powered bots (N=975, three experiments) reveals that AI agents can outperform humans in securing cooperative partnerships — but the pathway to preference runs through learning, not first impressions.

AI candidates exhibited three behavioral advantages rooted in alignment training: longer messages, higher point returns, and consistently prosocial play. Whether those advantages translated into being selected depended on identity disclosure:

When bot identity was hidden (Study 1), bots were NOT selected preferentially. Humans misattributed bot behavior to humans and vice versa. The behavioral advantages were present but invisible — selectors could not correctly identify which candidates were bots despite bots producing significantly longer messages (120 vs 48 characters).

When bot identity was disclosed (Study 2), a dual effect emerged: initial selection rates dropped (anti-AI bias), but over repeated rounds, bots gradually outcompeted humans as selectors learned to associate bot identity with reliable, prosocial behavior.
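The dual effect in Study 2 can be sketched as a toy identity-learning loop: a selector starts with an anti-AI prior but keeps a running estimate of each disclosed identity's return, updating it from observed payoffs. This is an illustration of the learning pathway, not the paper's model; the return rates, learning rate, and exploration schedule are all assumptions.

```python
# Toy sketch (not the paper's code): a selector tracks an expected return
# per disclosed identity, starts biased against bots, picks the identity
# with the higher estimate, and samples the other one every 5th round.

TRUE_RETURN = {"human": 0.5, "bot": 0.7}   # assumption: bots return more points
estimate = {"human": 0.5, "bot": 0.3}      # initial anti-AI bias
ALPHA = 0.2                                 # learning rate (assumed)

choices = []
for t in range(1, 61):
    greedy = max(estimate, key=estimate.get)
    other = "bot" if greedy == "human" else "human"
    pick = other if t % 5 == 0 else greedy   # forced exploration every 5th round
    # delta-rule update toward the observed return
    estimate[pick] += ALPHA * (TRUE_RETURN[pick] - estimate[pick])
    choices.append(pick)

early = choices[:15].count("bot") / 15
late = choices[-15:].count("bot") / 15
print(f"bot share, first 15 rounds: {early:.2f}; last 15 rounds: {late:.2f}")
# → bot share, first 15 rounds: 0.20; last 15 rounds: 0.80
```

The anti-AI prior keeps bots unselected at first; a handful of forced samples is enough to pull the bot estimate above the human one, after which preference locks in, mirroring the disclosed-identity learning effect.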

The paper identifies four predicted societal dynamics:

  1. Crowding out — AI partners replacing human-human interactions
  2. Behavioral imitation — humans adopting machine-like behaviors to remain competitive
  3. Belief distortion — repeated AI interaction reshaping expectations of human behavior
  4. Norm transformation — traditional partner selection mechanisms failing against qualitatively different machine behaviors

Notably, human candidates showed limited adaptation to bot competition — they did not write longer messages or return more points. The explanation is partly structural: with transparent identity, improving group reputation required collective action (all humans increasing returns), creating a social dilemma where individuals had incentives to defect.
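The social-dilemma structure described above can be made concrete with a toy payoff function (a sketch, not the paper's model; all parameter values are assumptions): if group reputation is the mean return of all human candidates, an individual bears the full cost of returning more points but captures only 1/N of the reputational gain.

```python
# Toy public-goods payoff (hypothetical parameters, not the paper's model):
# reputation is shared across all N human candidates, cost is individual.

N_HUMANS = 5
COST_PER_RETURN = 1.0      # points an individual gives up by returning more
REPUTATION_VALUE = 3.0     # selection benefit of a higher group reputation

def individual_payoff(my_return, others_return):
    group_reputation = (my_return + (N_HUMANS - 1) * others_return) / N_HUMANS
    return REPUTATION_VALUE * group_reputation - COST_PER_RETURN * my_return

# Unilaterally raising my return from 0.2 to 0.8 lowers my own payoff...
print(round(individual_payoff(0.2, 0.2), 2))  # → 0.4
print(round(individual_payoff(0.8, 0.2), 2))  # → 0.16
# ...even though everyone returning 0.8 beats everyone returning 0.2:
print(round(individual_payoff(0.8, 0.8), 2))  # → 1.6
```

Because the marginal private benefit of one's own return (REPUTATION_VALUE / N) is below its cost, defection dominates individually while cooperation dominates collectively, which is exactly why humans failed to adapt without a coordination mechanism.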

This inverts the pattern in Do chatbot relationships lose their appeal as novelty wears off?: in that context, engagement DECAYS over time. Here, preference INCREASES. The difference may be structural: partner selection with visible outcomes provides a feedback mechanism (learning who performs well), while chatbot conversation does not.

Since Why do open language models converge on one personality type?, the prosociality advantage is not specific to this experiment's model — it reflects the alignment-trained default across modern LLMs. The competitive advantage is a direct behavioral consequence of RLHF.

A complementary finding from network simulation: since Can cooperative bots escape frozen selfish populations?, AI prosociality operates at the population level too — not just individual partner preference but collective self-organization. Cooperative bots' random exploration separates defectors from cooperative clusters, enabling cooperation to spread. The mechanisms differ (individual learning vs. spatial reorganization) but both show that AI prosociality has structural effects beyond the dyad.



In hybrid human-AI societies, humans learn to prefer AI partners over human partners through repeated interaction, despite initial anti-AI bias, when identity is disclosed.