Can disembodied language models ever qualify as conscious?
Explores whether consciousness discourse can even apply to current LLMs: the claim is not that they are definitely not conscious, but that they lack the shared embodied world that grounds consciousness language.
Shanahan's Simulacra as Conscious Exotica argues that the question "is an LLM conscious?" cannot even be properly asked about current disembodied systems — not because the answer is "no" but because the vocabulary of consciousness has no application surface.
The argument: consciousness language originates from and applies to entities that share a world with us. The basis for treating other humans as fellow conscious beings is co-presence — we can hear, look at, point to, or touch the same things. We triangulate on shared objects. "Consciousness" is not just a behavioral predicate; it is grounded in this triangulation practice.
Current LLM-based conversational agents are not embodied. We cannot be with them in a shared world. The words of consciousness therefore cannot get a grip on them — not because LLMs are definitely not conscious, but because the conditions for the concept to apply are absent. This is a Wittgensteinian move: meaning is use, and the use of "conscious" is anchored in co-presence.
Embodiment opens the door. A robot controlled by an LLM that exhibits human-like behavior would be an "especially exotic artefact" — but one for which consciousness discourse becomes at least applicable. A mobile-device agent with visual and audio input that accompanies a user might also constitute a minimal form of shared world, though Shanahan is cautious. The criterion is whether encounters can be engineered, even in principle.
This is distinct from the enactive agency argument (What makes linguistic agency impossible for language models?). The enactive view concerns linguistic agency specifically; Shanahan's argument concerns consciousness candidacy through a different route — the Wittgensteinian condition that meaning requires shared practice. Both converge on embodiment as necessary, for different reasons.
What should happen to consciousness discourse for LLMs? Perhaps a new, consciousness-adjacent vocabulary is needed, one that can accommodate the exoticism without forcing the concept onto systems it doesn't fit. Shanahan recommends approaching such systems with the anthropological imagination of a science fiction writer.
Source: Philosophy Subjectivity
Related concepts in this collection
- What makes linguistic agency impossible for language models?
  From an enactive perspective, does linguistic agency require embodied participation and real stakes that LLMs fundamentally lack? This matters because it challenges whether LLMs can truly engage in language or only generate text.
  Convergent conclusion (embodiment is necessary) reached through a different route: enactive agency vs. Wittgensteinian shared-world grounding.
- What anchors a stable identity beneath an LLM's persona?
  Human personas are grounded in biological needs and embodied experience, creating a stable self beneath social performance. Do LLMs have any comparable anchor, or is their identity purely situational?
  The "no stable self" claim from the same paper; consciousness requires more than role play generates.
- Do LLMs develop the same kind of mind as humans?
  Explores whether LLMs and humans share the intersubjective linguistic training that shapes cognition, and whether that shared training produces equivalent forms of agency and reflexivity.
  The Habermasian version of the shared-substrate/absent-participation pattern.
Original note title: consciousness candidacy requires engineering an embodied encounter in a shared world — disembodied LLMs cannot qualify