Does Marxist alienation theory explain what AI does to cognitive work?
Marxist alienation frames AI as degrading authentic labor. But does that framework actually describe the shift happening with tokenization, or does it misdiagnose the transformation occurring in intelligence itself?
The standard left critique of AI typically runs as an alienation argument: AI takes cognitive labor that used to be performed by humans and converts it into a process humans are estranged from. The producer no longer recognizes the product as their own; the act of thinking becomes a wage-relation; cognitive labor is alienated in the same way physical labor was alienated under industrial capitalism. The critique is morally weighty and politically familiar.
The framing imports a presupposition that does not survive scrutiny. Marxist alienation analysis presumes a prior form of authentic labor that the new productive arrangement degrades. For physical labor, the prior form was craft: the artisan who controlled their tools, knew their materials, owned their product. For cognitive labor under AI, what is the prior form? Most cognitive labor in modern economies is already alienated in Marx's strict sense — knowledge workers do not control their cognitive tools (employer-supplied), do not own their cognitive products (employer-owned), and perform cognitive labor as a wage-relation. The AI moment does not introduce alienation; alienation was already constitutive of the cognitive workplace.
What AI introduces is a different kind of change: a transformation in the form of the intelligence-good itself. Intelligence used to be a craft-residue inside alienated labor — even alienated knowledge workers produced intelligence-as-object (reports, analyses, code) that had craft elements they could recognize. Tokenization converts the intelligence-good from object-with-craft-residue to flow-without-craft-residue. The transformation is in what intelligence becomes, not in whether labor is alienated.
This is why the framing matters for diagnosis. Alienation critiques prescribe re-grounding labor in craft, ownership, and recognition — useful prescriptions for many problems but orthogonal to what tokenization is doing. The right diagnostic frame is medium-transformation (McLuhan, Ong) rather than alienation (Marx). The related note "Is the LLM a tool or a new form of intelligence itself?" points to the same conclusion from the medium side.
The strongest counterargument: tokenization will produce its own form of alienation as workers lose what little craft-residue remained. This is plausible, but it is alienation as second-order effect, not as the primary descriptor of what AI is.
Source: Tokenization of Intelligence
Related concepts in this collection
- Does AI actually commodify expertise or tokenize it? The standard framing treats AI output like mass-produced commodities, but does AI's contextual, mutable nature fit better with token economics than commodity theory? (The categorical reframe that displaces the commodification/alienation framing.)
- Is the LLM a tool or a new form of intelligence itself? Does framing AI as merely delivering pre-existing intelligence miss what's actually happening? This explores whether the model itself constitutes a fundamentally new intelligence-medium with distinct cultural effects. (The medium-theoretic alternative to alienation analysis.)
- Is AI fundamentally changing how value gets produced? Rather than automating commodity production, does AI represent a shift from making identical stockpiled objects to generating contextual tokens on demand? And what makes this genuinely new? (The periodization that contextualizes why alienation analysis under-describes the moment.)
Original note title: AI tokenization is transformation not degradation — Marxist alienation does not capture what is happening