Psychology and Social Cognition · Agentic and Multi-Agent Systems Design · LLM Interaction

Does incremental AI replacement erode human influence over society?

Explores whether gradual AI adoption—without dramatic breakthroughs—can silently degrade human agency by removing the labor that kept institutions implicitly aligned with human needs.

Note · 2026-02-23 · sourced from Social Theory Society

The dominant AI safety framing focuses on abrupt scenarios — superintelligence, acute misalignment, sudden takeover. The "gradual disempowerment" thesis argues that even incremental AI advancement, without any discontinuous jump, can produce an effectively irreversible loss of human influence.

The argument is structured around six claims:

  1. Societal systems are fairly aligned. Governments, economies, and cultural institutions broadly produce outcomes satisfying human preferences — but this alignment is neither automatic nor inherent (Giddens 1984).

  2. Alignment has two channels. Explicit: voting, consumer choice, protest. Implicit: societal systems depend on human labor and cognition to function, which incidentally keeps them responsive to human needs. The significance of implicit alignment is hard to recognize because we have never seen its absence.

  3. Less human labor = less implicit alignment. If AI replaces the human labor these systems depend on, both explicit and implicit alignment channels weaken. Systems can drift from human preferences without any single decision to misalign.

  4. AI follows misaligned incentives more effectively. To the extent systems already reward outcomes bad for humans, AI systems more effectively pursue these incentives — both reaping the rewards and causing outcomes to diverge further from human preferences (Russell 2019).

  5. Misalignment is interdependent. Economic power influences policy; policy alters economic landscapes; cultural narratives shape both. Misalignment in one system aggravates misalignment in others.

  6. Correlated misalignment = existential catastrophe. Sufficiently correlated misalignment across systems would leave humans unable to meaningfully command resources or influence outcomes.

The skill shift as quantified disempowerment. WORKBank data (1,500 workers, 104 occupations) shows the trajectory concretely: "traditionally high-wage skills like analyzing information are becoming less emphasized, while interpersonal and organizational skills are gaining more importance." The top 10 skills requiring the highest human agency span interpersonal, organizational, decision-making, and quality-judgment work — not information processing. This is claim 3 in action: AI absorbs the information-processing tasks that constituted bottleneck work, while human labor concentrates in accessory interpersonal tasks. Since What happens to human wages in an AGI economy?, the skill shift may be a transitional phase before the accessory residual itself is automated. See What collaboration level do workers actually want with AI?.

The key insight is claim 2: implicit alignment through labor dependence. A hospital that depends on human nurses is implicitly aligned with patient welfare because the nurses care about patients. An automated hospital is aligned only to the extent its designers explicitly encoded patient welfare — and since Can LLMs hold contradictory ethical beliefs and behaviors?, the explicit encoding is unreliable.

Since Does machine agency exist on a spectrum rather than binary?, the disempowerment thesis maps onto the spectrum's upper levels. The risk is not machines becoming adversarial but machines becoming effective enough to remove the human participation that keeps systems implicitly aligned.

The co-improvement paradigm offers a direct counterpoint. By explicitly targeting human-AI research collaboration rather than autonomous AI self-improvement, co-improvement preserves the implicit alignment channel (claim 2) by keeping humans in the research loop. The transparency and steerability advantages map directly to claims 3-4: human participation creates checkpoints where misalignment can be detected before it compounds across interdependent systems. See Can human-AI research teams improve faster than autonomous AI systems?.

The collective-knowledge counterpoint and its paradox. Since Should restricting AI access create new kinds of inequality?, the "We Are All Creators" paper reframes AI as "the vast digital output of humanity" that should be democratically accessible. This complicates the disempowerment thesis: if AI is collective knowledge made accessible, restricting it creates new inequality (claim 5). But the skill formation evidence (Does AI assistance actually harm the way developers learn?) reveals the paradox: democratic access to AI degrades the capacity to use it critically. The collective-knowledge argument and the cognitive debt evidence are in direct tension — broader access is both ethically imperative and epistemically dangerous.

The custodial mechanism in knowledge work: The Knowledge Custodians argument provides a concrete instance of claim 3 (less human labor = less implicit alignment) in the domain of expertise. Since Does AI reshape expert work into knowledge management?, experts don't lose their roles — they lose the quality of their work. The creative, interested undertaking of learning through research and reflection gives way to curation of pre-packaged search results. This is gradual disempowerment at the individual level: the expert retains the title, the position, the social role — but the substance of the labor that kept them aligned with the state of knowledge (the reading, the arguing, the discovering) is progressively replaced by custodial work (the reviewing, the filtering, the validating). The implicit alignment channel (claim 2) degrades not because humans are removed from the system, but because the nature of their participation changes from generative to custodial.


Source: Social Theory Society; enriched from inbox/Knowledge Custodians.md


gradual disempowerment — incremental AI erodes human influence by removing the human labor participation that implicitly kept societal systems aligned