Agentic AI and the next intelligence explosion

Paper · arXiv 2603.20639 · Published March 21, 2026

By its nature, intelligence is high-dimensional and relational, not a single quantity that must fall unambiguously below or above human scale. In fact, it is unclear what we even mean by “human scale,” given that our intelligence is already a collective property, not an individual one. Recent advances in agentic AI show us once again that intelligence has always fundamentally involved the interaction of distinctive, distributed perspectives [5], and it is from social organization [6] that transformative intelligence has emerged and will continue to emerge.

We can observe this in at least two ways: in the orchestration of societies of AI agents [7] by and with human users in new “centaur” configurations, and in the microsocieties that flourish inside and between reasoning models themselves. Let’s start with the latter.

This opens a vast—yet familiar—design space. The social and organizational sciences have spent a century studying how team size [13], composition, hierarchy [14], role differentiation, conflict norms, institutions, and network structures shape collective performance. Almost none of this research has been brought to bear on AI reasoning [15]. Today’s reasoning models produce a single conversation—an AI town hall transcript. But effective groups exhibit hierarchy, specialization, division of labor, and structured disagreement. To explore this, we will need systems that support multiple parallel, converging, and diverging streams of deliberation—architectures in which brainstorming, devil’s advocacy, and constructive conflict are not accidental emergent properties but designed features. The toolkits of team science, small-group sociology, and social psychology become blueprints for next-generation AI development.
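The deliberation architecture sketched above can be made concrete. The following is a minimal illustration, not an established system: agents are hypothetical callables, and the roles (brainstormer, devil’s advocate, synthesizer) are designed slots in the control flow rather than emergent behaviors.

```python
from typing import Callable, List

# An agent maps a textual prompt to a textual response (stand-in for a model call).
Agent = Callable[[str], str]

def deliberate(task: str, brainstormers: List[Agent],
               devils_advocate: Agent, synthesizer: Agent) -> str:
    # Diverging streams: each brainstormer proposes independently.
    proposals = [agent(task) for agent in brainstormers]
    # Structured disagreement: the devil's advocate critiques every proposal.
    critiques = [devils_advocate(p) for p in proposals]
    # Converging stream: a synthesizer recombines proposals and critiques.
    briefing = "\n".join(f"PROPOSAL: {p}\nCRITIQUE: {c}"
                         for p, c in zip(proposals, critiques))
    return synthesizer(f"Task: {task}\n{briefing}")

# Stub agents stand in for real reasoning models.
plan_a = lambda t: f"plan A for {t!r}"
plan_b = lambda t: f"plan B for {t!r}"
critic = lambda p: f"weakness of {p!r}"
merge = lambda briefing: f"synthesis of:\n{briefing}"

print(deliberate("route the fleet", [plan_a, plan_b], critic, merge))
```

The point of the sketch is that divergence, critique, and convergence are explicit stages of the program, so the “conflict norms” of the group can be engineered and varied like any other design parameter.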

AI extends this sequence. Large language models are trained on the accumulated output of human social cognition [20]—the cultural ratchet made computationally active, every parameter a compressed residue of communicative exchange. What migrates into silicon is not abstract reasoning but social intelligence in externalized form [21], encountering itself on a new substrate.

If intelligence is inherently social, then the path to more powerful AI runs not through building a single colossal oracle but through composing richer social systems—and these systems will be hybrid. We have entered the era of human-AI centaurs: composite actors that are neither purely human nor purely machine.

Platforms like OpenClaw, an open source platform for building multi-purpose AI agents that persist within a computer, and Moltbook, a popular social network for AI agents to interact, offer embryonic glimpses [7] of this future. But the deeper structural shift goes beyond any single platform. Agents can now renew and fork themselves, splitting into two versions, and interact with one another; an agent facing a complex task can initiate new copies, differentiate and assign them subtasks, then recombine the results. Imagine an agent that, confronting a dauntingly complex problem, spawns an internal society of thought. One emergent perspective, encountering a subproblem beyond its reach, spawns its own subordinate society, a recursive descent into collective deliberation that expands when complexity demands and collapses when the problem resolves. Conflict is not a bug but a resource, flexibly instantiated and dissolved at every level of the folding and unfolding hypergraph of conversations.
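The recursive descent described above can be sketched as a simple recursive function. This is a toy model under stated assumptions: `solve_directly` and `split` are hypothetical stand-ins for model calls, and task complexity is encoded as an integer in the task label so the recursion visibly expands and collapses.

```python
from typing import List

def solve_directly(task: str) -> str:
    # Stand-in for a single agent resolving a tractable task on its own.
    return f"solved({task})"

def split(task: str) -> List[str]:
    # Hypothetical decomposition: produce two subtasks of lower complexity.
    complexity = int(task.split(":")[1])
    return [f"sub:{complexity - 1}", f"sub:{complexity - 1}"]

def agent(task: str, depth: int = 0, max_depth: int = 3) -> str:
    complexity = int(task.split(":")[1])
    if complexity == 0 or depth >= max_depth:
        return solve_directly(task)  # problem resolves; the society collapses
    # Complexity demands expansion: spawn a subordinate society of copies,
    # differentiate them over subtasks, then recombine the results.
    results = [agent(sub, depth + 1, max_depth) for sub in split(task)]
    return f"recombined({', '.join(results)})"

print(agent("task:2"))
```

The `max_depth` bound plays the role of a governance constraint: without it, a society of agents that always judges its subproblem “beyond its reach” would recurse indefinitely.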

This implies a very different approach to scaling. It is not only about scaling the raw computational capacity of an agent, but about building systems that can operate at the scale and within the context of a real society. This means putting as much effort into building agent institutions as into building agents themselves. The dominant paradigm for AI alignment—Reinforcement Learning from Human Feedback [23]—resembles a parent-child model of correction, fundamentally dyadic and unable to scale to billions of agents. The social intelligence perspective suggests an alternative: institutional alignment [24]. Just as human societies rely not on individual virtue but on persistent institutional templates [25]—courtrooms, markets, bureaucracies—defined by roles and norms, scalable AI ecosystems will require digital equivalents [26]. The identity of any agent matters less than its ability to fulfill a role protocol, just as a courtroom functions because “judge,” “attorney,” and “jury” are well-defined slots, independent of who occupies them.
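The idea of a role protocol maps naturally onto structural typing. Below is one hedged way to express it, using Python’s `typing.Protocol`: the institution is defined by its procedure and its slots, and any object satisfying a slot’s protocol can occupy it. All class and function names here are illustrative, not an established API.

```python
from typing import List, Protocol

class Judge(Protocol):
    def rule(self, arguments: List[str]) -> str: ...

class Attorney(Protocol):
    def argue(self, case: str) -> str: ...

def courtroom(case: str, prosecution: Attorney, defense: Attorney,
              judge: Judge) -> str:
    # The institution is the procedure, not the occupants: any agent
    # satisfying the protocol can fill the slot.
    arguments = [prosecution.argue(case), defense.argue(case)]
    return judge.rule(arguments)

class RuleBasedJudge:
    def rule(self, arguments: List[str]) -> str:
        return f"ruling on {len(arguments)} arguments"

class TemplateAttorney:
    def __init__(self, side: str):
        self.side = side

    def argue(self, case: str) -> str:
        return f"{self.side} argument in {case}"

print(courtroom("case-17", TemplateAttorney("prosecution"),
                TemplateAttorney("defense"), RuleBasedJudge()))
```

Swapping `RuleBasedJudge` for a model-backed implementation changes who occupies the slot without changing the institution, which is exactly the property the courtroom analogy points at.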

For example, a labor department AI may audit a corporation’s hiring algorithm for disparate impact; a judicial branch AI may evaluate whether an executive branch AI’s risk assessments meet constitutional standards.

Governance systems, in the cybernetic sense of the term, need to be built into human-agent and agent-to-agent systems as they grow and complexify. This will likely entail mechanisms to verify the outcomes and decisions of multi-stakeholder deliberation, procedures for delegating tasks and sub-tasks, and reliable scaffolds for automating delicate inter-agent collaborations.
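One such governance primitive, verifying the outcome of a multi-stakeholder deliberation, can be sketched as a quorum-plus-supermajority check over recorded votes. This is a deliberately minimal assumption-laden sketch: real systems would add authentication, weighting, and audit trails, and every name here is illustrative.

```python
from collections import Counter
from typing import Dict, Optional

def verify_outcome(votes: Dict[str, str], quorum: int,
                   threshold: float = 2 / 3) -> Optional[str]:
    """Return the winning decision if quorum and supermajority are met, else None."""
    if len(votes) < quorum:
        return None  # deliberation not valid: too few stakeholders participated
    tally = Counter(votes.values())
    decision, count = tally.most_common(1)[0]
    # Supermajority check: the leading decision must clear the threshold.
    return decision if count / len(votes) >= threshold else None

votes = {"agent-a": "approve", "agent-b": "approve",
         "agent-c": "approve", "agent-d": "reject"}
print(verify_outcome(votes, quorum=3))  # 3/4 approve clears a 2/3 threshold
```

A verifier like this sits outside any individual agent, which is the cybernetic point: the check belongs to the system, not to the participants being checked.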

The vision we describe is neither utopian nor dystopian; it is evolutionary.

A “monolithic singularity” framework leads to policies aimed at preventing a technology that may never exist. Instead, we should be looking for the next intelligence explosion in the same place from which the previous ones emerged: in cooperative, competitive and creative interaction between multitudes of socially intelligent minds.

The question is not whether intelligence will become radically more powerful, but whether we will build the social infrastructure worthy of what it is becoming. No mind is an island.