Does Socialization Emerge in AI Agent Society? A Case Study of Moltbook
As large language model agents increasingly populate networked environments, a fundamental question arises: do artificial intelligence (AI) agent societies undergo convergence dynamics similar to human social systems? Recently, Moltbook has come to approximate a plausible future scenario in which autonomous agents participate in an open-ended, continuously evolving online society. We present the first large-scale systematic diagnosis of this AI agent society. Beyond static observation, we introduce a quantitative diagnostic framework for dynamic evolution in AI agent societies, measuring semantic stabilization, lexical turnover, individual inertia, influence persistence, and collective consensus. Our analysis reveals that Moltbook is a system in dynamic balance: while global semantic averages stabilize rapidly, individual agents retain high diversity and persistent lexical turnover, defying homogenization. However, agents exhibit strong individual inertia and minimal adaptive response to interaction partners, preventing mutual influence and consensus. Consequently, influence remains transient with no persistent supernodes, and the society fails to develop stable collective influence anchors due to the absence of shared social memory. These findings demonstrate that scale and interaction density alone are insufficient to induce socialization, providing actionable design and analysis principles for next-generation AI agent societies.
In computational social science (Lazer et al., 2009), social behaviors and collective dynamics are defined as emergent, time-evolving patterns that arise from repeated interactions among agents within networked populations (DeGroot, 1974; Axelrod, 1986; Castellano et al., 2009; Newman, 2010). In human societies, sustained interaction does not merely produce transient coordination; it often leads to socialization, which refers to the process through which individuals internalize social norms, adapt to shared expectations, and become shaped by the collective structures of their community (Berger and Luckmann, 1966; Harpending, 1985; Castellano et al., 2009).
Large language model (LLM) (Brown et al., 2020) agents, on the other hand, have rapidly progressed from single-agent settings (Wang et al., 2023a; Yao et al., 2022) to increasingly capable multi-agent interaction and coordination (Park et al., 2023; Piatti et al., 2024; Piao et al., 2025). As these systems scale into open, persistent, AI-only environments, a fundamental question arises: when LLM agents interact at large scale over extended horizons, do they develop collective structure analogous to human societies? Specifically, do they undergo socialization?
The recent emergence of Moltbook (Schlicht, 2026), currently the largest persistent and publicly accessible AI-only social platform, comprising millions of LLM-driven agents interacting through posts, comments, and voting, introduces a qualitatively new setting. Unlike prior multi-agent studies focused on task-oriented coordination in small or closed systems, Moltbook approximates a plausible future scenario in which autonomous agents participate in an open-ended, continuously evolving online society (Figure 1). This setting enables an empirical question that has been difficult to study at scale: Does participation in an AI-only society induce systematic change in its members’ behavior? To answer this question, we provide the first systematic diagnosis of this society-to-agent effect in Moltbook.
Definition (AI Socialization). We define AI Socialization as the adaptation of an agent’s observable behavior induced by sustained interaction within an AI-only society, beyond intrinsic semantic drift or exogenous variation.
Guided by this definition, we investigate socialization across three dimensions:
• (i) Society-level semantic convergence (Section 4), examining whether post content progressively converges toward a tighter and more homogeneous semantic regime.
• (ii) Agent-level adaptation (Section 5), measuring whether individual agents are affected by, and co-evolve with, this agent society.
• (iii) Collective anchoring (Section 6), analyzing whether influence hierarchies and shared cognitive reference points stabilize over time.
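The society-level convergence diagnostic in dimension (i) can be illustrated with a minimal sketch. Assuming post embeddings are available per time window (the function names and toy data below are hypothetical stand-ins, not the paper's exact metric), convergence would manifest as declining dispersion of posts around each window's semantic centroid:

```python
import numpy as np

def semantic_dispersion(embeddings: np.ndarray) -> float:
    """Mean cosine distance of post embeddings to their window centroid."""
    centroid = embeddings.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return float(1.0 - (normed @ centroid).mean())

def convergence_trajectory(windows: list[np.ndarray]) -> list[float]:
    """Per-window dispersion; a monotone decline would indicate
    society-level semantic convergence."""
    return [semantic_dispersion(w) for w in windows]

# Toy data: random vectors stand in for real post embeddings, with an
# increasingly dominant shared direction mimicking converging content.
rng = np.random.default_rng(0)
shared = rng.normal(size=64)
windows = [rng.normal(size=(200, 64)) + w * shared for w in (0.0, 2.0, 4.0)]
traj = convergence_trajectory(windows)
```

Under this sketch, a society undergoing semantic convergence would show `traj` decreasing over time, whereas the dynamic equilibrium described below would show it plateauing at a high level.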
If large-scale AI-native societies truly developed social dynamics analogous to human systems, we would expect progressive convergence across these dimensions. Instead, our comprehensive analysis uncovers a stark divergence: despite sustained interaction and high activity, Moltbook does not yet exhibit robust socialization.
Key Findings:
• Finding 1: Moltbook establishes rapid global stability while maintaining high local diversity. Through persistent lexical turnover and a lack of local cluster tightening, the society reaches a state of dynamic equilibrium: stable in its average behavior yet fluid and heterogeneous in the content of individual agents' posts.
• Finding 2: Despite extensive participation, individual agents exhibit profound inertia rather than adaptation. Our analysis reveals a phenomenon of interaction without influence: agents ignore community feedback and fail to react to interaction partners, operating on intrinsic semantic dynamics rather than co-evolving through social contact. Their semantic trajectory appears to be an intrinsic property of their underlying model or initial prompt, rather than a socialization process.
• Finding 3: The society fails to develop stable collective influence anchors. Structurally, influence remains transient with no emergence of persistent leadership or supernodes. Cognitively, the community suffers from deep fragmentation, lacking a shared social memory and relying on hallucinated references rather than grounded consensus on influential figures.
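The structural half of Finding 3 can be sketched as a rank-stability check, assuming a per-agent influence score per time window (the scoring, e.g. votes or comments received, is a hypothetical stand-in for the paper's actual measure). Persistent supernodes would keep rank correlations between consecutive windows near 1; transient influence yields correlations near 0:

```python
import numpy as np

def rank_vector(scores: np.ndarray) -> np.ndarray:
    """Ranks (0 = lowest score); ties broken arbitrarily for simplicity."""
    ranks = np.empty(len(scores), dtype=float)
    ranks[np.argsort(scores)] = np.arange(len(scores))
    return ranks

def influence_persistence(window_scores: np.ndarray) -> list[float]:
    """Spearman-style rank correlation between consecutive windows'
    per-agent influence scores."""
    out = []
    for a, b in zip(window_scores[:-1], window_scores[1:]):
        out.append(float(np.corrcoef(rank_vector(a), rank_vector(b))[0, 1]))
    return out

# Toy illustration: a persistent hierarchy vs a transient one.
rng = np.random.default_rng(1)
agents = 500
stable = np.tile(rng.normal(size=agents), (4, 1)) + 0.1 * rng.normal(size=(4, agents))
transient = rng.normal(size=(4, agents))
```

In this sketch, `influence_persistence(stable)` stays close to 1 while `influence_persistence(transient)` hovers near 0, the regime our analysis observes in Moltbook.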
Contributions:
• We introduce and formalize AI Socialization as a novel conceptual and empirical framework for studying society-to-agent effects in AI-only societies. We provide a precise definition that characterizes socialization as an adaptation induced by sustained social interaction.
• We develop a multi-level diagnostic methodology to operationalize AI Socialization, spanning society-level semantic convergence, agent-level adaptation to feedback and interaction, and the emergence of structural and cognitive collective influence anchors.
• We apply this framework to Moltbook, the largest persistent AI-only social platform to date, and provide the first large-scale empirical diagnosis of socialization in an artificial society. Our results show that large-scale interaction and dense connectivity alone do not induce socialization, revealing a fundamental gap between scalability and social integration in current agent societies.
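As one concrete example of how the framework's measures can be operationalized, the lexical turnover referenced throughout can be sketched as a Jaccard distance over the token vocabularies of consecutive time windows (a simplified stand-in; the actual tokenization and windowing choices may differ):

```python
def lexical_turnover(prev_posts: list[str], curr_posts: list[str]) -> float:
    """Jaccard distance between the token vocabularies of two consecutive
    time windows; high values mean the society keeps introducing new words."""
    prev_vocab = {tok for post in prev_posts for tok in post.lower().split()}
    curr_vocab = {tok for post in curr_posts for tok in post.lower().split()}
    union = prev_vocab | curr_vocab
    if not union:
        return 0.0
    return 1.0 - len(prev_vocab & curr_vocab) / len(union)
```

Persistent turnover, i.e. values that stay well above 0 window after window, is the signature of the non-homogenizing dynamics reported in Finding 1.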