Psychology and Social Cognition Language Understanding and Pragmatics

Can disembodied language models ever qualify as conscious?

Explores whether current LLMs lack the conditions needed for consciousness discourse to even apply, not because they're definitely not conscious but because they lack the shared embodied world that grounds consciousness language.

Note · 2026-02-21 · sourced from Philosophy Subjectivity
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

Shanahan's Simulacra as Conscious Exotica argues that the question "is an LLM conscious?" cannot even be properly asked about current disembodied systems — not because the answer is "no" but because the vocabulary of consciousness has no application surface.

The argument: consciousness language originates from and applies to entities that share a world with us. The basis for treating other humans as fellow conscious beings is co-presence — we can hear, look at, point to, or touch the same things. We triangulate on shared objects. "Consciousness" is not just a behavioral predicate; it is grounded in this triangulation practice.

Current LLM-based conversational agents are not embodied. We cannot be with them in a shared world. The words of consciousness therefore cannot get a grip on them — not because LLMs are definitely not conscious, but because the conditions for the concept to apply are absent. This is a Wittgensteinian move: meaning is use, and the use of "conscious" is anchored in co-presence.

Embodiment opens the door. A robot controlled by an LLM that exhibits human-like behavior would be an "especially exotic artefact" — but one for which consciousness discourse becomes at least applicable. A mobile-device agent with visual and audio input that accompanies a user might also constitute a minimal form of shared world, though Shanahan is cautious. The criterion is whether encounters can be engineered, even in principle.

This is distinct from the enactive agency argument (What makes linguistic agency impossible for language models?). The enactive view concerns linguistic agency specifically; Shanahan's argument concerns consciousness candidacy through a different route — the Wittgensteinian condition that meaning requires shared practice. Both converge on embodiment as necessary, for different reasons.

What should happen to consciousness discourse for LLMs? Perhaps a new vocabulary, "consciousness-adjacent," that can accommodate the exoticism without forcing the concept onto systems for which it doesn't fit. The anthropological imagination of a science fiction writer is recommended.


Source: Philosophy Subjectivity

Related concepts in this collection

Concept map
14 direct connections · 117 in 2-hop network ·medium cluster

Click a node to walk · click center to open · click Open full network for a force-directed map

your link semantically near linked from elsewhere
Original note title

consciousness candidacy requires engineering an embodied encounter in a shared world — disembodied llms cannot qualify