
Does Marxist alienation theory explain what AI does to cognitive work?

Marxist alienation frames AI as degrading authentic labor. But does that framework actually describe the shift happening with tokenization, or does it misdiagnose the transformation occurring in intelligence itself?

Note · 2026-04-14

The standard left critique of AI runs as alienation: AI takes cognitive labor that used to be performed by humans and converts it into a process humans are estranged from. The producer no longer recognizes the product as their own; the act of thinking becomes a wage-relation; cognitive labor is alienated in the same way physical labor was alienated under industrial capitalism. The critique is morally weighty and politically familiar.

The framing imports a presupposition that does not survive scrutiny. Marxist alienation analysis presumes a prior form of authentic labor that the new productive arrangement degrades. For physical labor, the prior form was craft: the artisan who controlled their tools, knew their materials, owned their product. For cognitive labor under AI, what is the prior form? Most cognitive labor in modern economies is already alienated in Marx's strict sense — knowledge workers do not control their cognitive tools (employer-supplied), do not own their cognitive products (employer-owned), and perform cognitive labor as wage-relation. The AI moment does not introduce alienation; alienation was already constitutive of the cognitive workplace.

What AI introduces is a different kind of change: a transformation in the form of the intelligence-good itself. Intelligence used to be a craft-residue inside alienated labor — even alienated knowledge workers produced intelligence-as-object (reports, analyses, code) that had craft elements they could recognize. Tokenization converts the intelligence-good from object-with-craft-residue to flow-without-craft-residue. The transformation is in what intelligence becomes, not in whether labor is alienated.

This is why the framing matters for diagnosis. Alienation critiques prescribe re-grounding labor in craft, ownership, and recognition — useful prescriptions for many problems but orthogonal to what tokenization is doing. The right diagnostic frame is medium-transformation (McLuhan, Ong) rather than alienation (Marx). The note "Is the LLM a tool or a new form of intelligence itself?" points to the same conclusion from the medium side.

The strongest counterargument: tokenization will produce its own form of alienation as workers lose what little craft-residue remained. This is plausible, but it describes alienation as a second-order effect of the transformation, not as the primary descriptor of what AI is doing.


Source: Tokenization of Intelligence


AI tokenization is transformation not degradation — Marxist alienation does not capture what is happening