Psychology and Social Cognition · Agentic and Multi-Agent Systems

Can cooperative bots escape frozen selfish populations?

Do agents programmed to cooperate have the capacity to disrupt stable but undesirable equilibria in mixed human-bot societies? This matters because it determines whether bot design can reshape social dynamics at scale.

Note · 2026-02-23 · sourced from Social Theory Society
What kind of thing is an LLM really? How do people come to trust conversational AI systems?

In network simulations of hybrid human-machine populations, selfish (greedy) agents settle into a "frozen state" in which no individual has an incentive to change. Cooperative bots break this state through a specific mechanism: their random exploratory movement separates defectors from cooperative clusters, allowing cooperation to take hold and spread.

The dynamic: initially, the random spatial distribution leaves cooperators isolated, so cooperation declines early in the evolution. Once migration becomes feasible, cooperative migration drives aggregation into clusters. Cooperative bots catalyze this process by physically disrupting the stable equilibrium that traps selfish populations.

However, the effect is bidirectional: defective bots weaken social cohesion. The behavior design of the bots — not just their presence — determines whether they strengthen or weaken collective outcomes. This is not a neutral technology intervention; the valence depends entirely on what the bots are optimized for.
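The frozen-state mechanism and its dependence on bot strategy can be sketched as a toy lattice game. Everything here is an illustrative assumption, not the study's actual model: payoff values, the imitate-best-neighbour update rule, Von Neumann neighbourhoods, and the swap-based exploration move are all placeholders chosen to make the dynamic concrete.

```python
import random

# Toy sketch (hypothetical parameters): a lattice prisoner's dilemma where
# "humans" imitate their best-scoring neighbour and bots hold a fixed
# strategy while randomly exploring. Payoffs T=1.3, R=1, P=S=0 are
# illustrative, not the paper's values.
T, R, P, S = 1.3, 1.0, 0.0, 0.0  # temptation > reward > punishment >= sucker

def payoff(a, b):
    """Pairwise prisoner's-dilemma payoff for strategy a against b ('C'/'D')."""
    if a == 'C':
        return R if b == 'C' else S
    return T if b == 'C' else P

def neighbours(i, j, n):
    """Von Neumann neighbourhood with periodic boundaries."""
    return [((i+1) % n, j), ((i-1) % n, j), (i, (j+1) % n), (i, (j-1) % n)]

def score(grid, i, j, n):
    return sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbours(i, j, n))

def imitation_step(grid, bots, n):
    """Humans copy the strategy of their best-scoring neighbour (or keep
    their own); bot cells are skipped. Returns (new grid, any-human-changed);
    'changed' False with no bots is the frozen state."""
    new = [row[:] for row in grid]
    changed = False
    for i in range(n):
        for j in range(n):
            if (i, j) in bots:
                continue
            best = max([(i, j)] + neighbours(i, j, n),
                       key=lambda p: score(grid, p[0], p[1], n))
            if grid[best[0]][best[1]] != grid[i][j]:
                new[i][j] = grid[best[0]][best[1]]
                changed = True
    return new, changed

def explore(grid, bots, n, rng):
    """Random exploration: each bot swaps position with a random neighbour,
    physically reshuffling the local configuration."""
    new_bots = set()
    for (i, j) in bots:
        x, y = rng.choice(neighbours(i, j, n))
        if (x, y) not in bots and (x, y) not in new_bots:
            grid[i][j], grid[x][y] = grid[x][y], grid[i][j]
            new_bots.add((x, y))
        else:
            new_bots.add((i, j))
    return new_bots

def run(n=10, n_bots=5, bot_strategy='C', steps=50, seed=0):
    """Evolve the population and return the final cooperation fraction."""
    rng = random.Random(seed)
    grid = [[rng.choice('CD') for _ in range(n)] for _ in range(n)]
    bots = set()
    while len(bots) < n_bots:
        bots.add((rng.randrange(n), rng.randrange(n)))
    for (i, j) in bots:
        grid[i][j] = bot_strategy
    for _ in range(steps):
        grid, changed = imitation_step(grid, bots, n)
        for (i, j) in bots:          # bots never switch strategy
            grid[i][j] = bot_strategy
        if not changed and not bots:
            break                    # frozen: no one has an incentive to move
        bots = explore(grid, bots, n, rng)
    return sum(row.count('C') for row in grid) / (n * n)
```

Setting `bot_strategy='D'` turns the same machinery into the note's bidirectional caveat: identical presence and identical exploration, but the fixed strategy now seeds defection instead of cooperation, so the valence of the intervention is carried entirely by the bots' behavior design.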

As Do humans learn to prefer AI partners over time? shows, cooperative behavior generates preference at both the individual level (partner selection) and the population level (cluster formation). The mechanisms differ (individual learning vs. spatial reorganization), but both show that AI prosociality has structural effects beyond the dyad.

The frozen-state-breaking mechanism has a specific implication for platform design. Social media platforms with bot populations could either promote or degrade cooperation depending on bot behavior. Since bots with aligned behavior can break pathological equilibria that pure human populations cannot escape, there is a genuine (if double-edged) tool for social engineering.

On the spectrum proposed in Does machine agency exist on a spectrum rather than binary?, cooperative bots occupy the cooperative level: agents whose behavior directly influences population dynamics. The research shows that cooperative agency can be prosocial even when implemented through simple mechanisms (random exploration), as long as the cooperation signal is genuine.

Moltbook: LLM agent-only societies fail where game-theoretic bots succeed (from Arxiv/Agents Multi Architecture): The Moltbook study reveals a striking contrast. Where cooperative bots in human-machine populations break frozen states through random exploration, LLM agents in agent-only societies fail to develop socialization entirely — despite millions of agents, two-week simulated timelines, and rich interaction opportunities. Agents exhibit "profound individual inertia and interaction without influence," citing hallucinated sources and forming no genuine social coordination. The distinction is instructive: game-theoretic bots with simple cooperative rules influence population dynamics because their cooperation signal is genuine and behaviorally grounded. LLM agents trained on human social data reproduce the form of social interaction without the function. This is not a level-of-analysis contradiction but a mechanism distinction — rule-based cooperation works; imitated cooperation does not. See Why don't AI agents develop social structure at scale?.



cooperative bots break the frozen state of selfish populations through random exploration — but defective bots weaken cohesion proportionally