Did Chalmers abandon his own Extended Mind principles?
Chalmers co-authored the Extended Mind thesis, which grounds cognition in relational integration across brain and environment. Does his 2026 account of LLM interlocutors contradict this foundational commitment by localizing mind inside the AI?
Clark and Chalmers (1998) argued that cognitive processes can extend beyond the biological brain into the environment when external resources play the right functional role — Otto's notebook is part of his memory system. The thesis commits its authors to a relational-constitution picture: what counts as a cognitive system is determined by functional integration, not by skin-and-skull boundaries. Where the process is depends on what participates in it.
The 2026 Chalmers asks what the LLM interlocutor is and locates it in the virtual model instance — an entity specified by the AI system's computational pattern. But the conversational context that specifies the virtual instance is jointly produced by human and AI. On his own 1998 principles, the cognitive system is the relational complex (user + context + model + infrastructure), not the AI side alone. To locate the interlocutor inside the AI is to draw exactly the kind of skin-and-skull boundary the Extended Mind thesis was designed to dissolve.
The move is philosophically potent because it uses Chalmers against Chalmers: not a different philosopher's framework, but his own earlier commitment. The 2026 paper stands in tension with the 1998 paper, and only two resolutions are available: either the 1998 thesis was wrong (but Chalmers has not retracted it), or the 2026 account misapplies the 1998 principles to the LLM case. The second option is the more natural reading: the virtual-instance account implicitly adopts the internalist picture the Extended Mind thesis rejected.
Source: AI Generated Research/Chalmers Engagement/project-brief.md
Related concepts in this collection
- What actually specifies a virtual instance in conversation? (the decomposition this twist supports)
  If Chalmers locates the LLM interlocutor in a persistent virtual instance, what component (the model, the infrastructure, or the conversation) actually makes that instance this one and not another?
- What kind of entity are we actually talking to when using an LLM? (the Chalmers taxonomy being critiqued)
  When you converse with an LLM, are you addressing the model itself, the hardware running it, or something else? Understanding what the interlocutor really is matters for questions about identity, responsibility, and continuity.
Original note title
the Extended Mind thesis used against its co-author — Clark and Chalmers 1998 committed to relational constitution that the 2026 Chalmers abandons for LLMs