Tags: Language Understanding and Pragmatics · LLM Reasoning and Architecture · Conversational AI Systems

What kind of entity are we actually talking to when using an LLM?

When you converse with an LLM, are you addressing the model itself, the hardware running it, or something else? Understanding what the interlocutor really is matters for questions about identity, responsibility, and continuity.

Note · 2026-04-15
What kind of thing is an LLM really?

Chalmers' What We Talk To When We Talk To Language Models asks what kind of entity users are actually addressing when they converse with an LLM, and works through four candidate individuation schemes. The model itself fails as the interlocutor because a single model serves many simultaneous users carrying on categorically different conversations — whatever users address, it is not the pretrained weights as such. The hardware instance fails because modern serving infrastructure is distributed and multi-tenanted; any given conversation spans many hardware instances, and any given instance hosts many conversations. Two candidates remain.

The virtual model instance is the computational pattern that persists across the back-end implementation: its identity is given by the conversational context, which the infrastructure reconstitutes on whatever hardware handles each request. The thread generalizes this to multi-model cases, where the same conversation may be continued by a different model (e.g., after a version upgrade) and the unit of continuity is the sequence of successor-related exchanges. Chalmers' answer: the interlocutor is the virtual instance within a single model, and the thread across models.
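The four candidate units can be made concrete with a toy sketch. This is not any real serving stack, and all class and field names here are illustrative assumptions; it only shows how one model object can back many hardware replicas, how a virtual instance is the context reconstituted on whichever replica handles a request, and how a thread survives a model swap.

```python
# Toy sketch of Chalmers' four individuation candidates (illustrative only).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Model:
    """The pretrained weights: one object shared by every conversation."""
    name: str

@dataclass
class HardwareInstance:
    """A physical replica; any given request may land on any replica."""
    gpu_id: int
    model: Model

@dataclass
class VirtualInstance:
    """The persistent conversational context, reconstituted per request
    on whatever hardware the router happens to pick."""
    context: list = field(default_factory=list)

    def step(self, hw: HardwareInstance, user_msg: str) -> str:
        self.context.append(("user", user_msg))
        reply = f"[{hw.model.name} on gpu{hw.gpu_id}] ack: {user_msg}"
        self.context.append(("assistant", reply))
        return reply

@dataclass
class Thread:
    """Cross-model continuity: the same exchange sequence carried
    forward by a successor model after an upgrade."""
    instance: VirtualInstance = field(default_factory=VirtualInstance)

    def continue_with(self, hw: HardwareInstance, msg: str) -> str:
        return self.instance.step(hw, msg)

model_v1, model_v2 = Model("m-1"), Model("m-2")
thread = Thread()
thread.continue_with(HardwareInstance(0, model_v1), "hello")
# New hardware, same virtual instance:
thread.continue_with(HardwareInstance(3, model_v1), "still you?")
# New model, same thread:
thread.continue_with(HardwareInstance(1, model_v2), "after upgrade")
assert len(thread.instance.context) == 6
```

The point of the sketch is that neither `Model` nor `HardwareInstance` varies one-to-one with a conversation, whereas `VirtualInstance` and `Thread` do.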

The taxonomy is analytically useful even if one rejects the ontological commitments Chalmers builds on top of it. Separating model, hardware, virtual instance, and thread makes it possible to ask which level of individuation the folk-psychological and moral-status vocabulary is supposed to attach to. Different answers commit one to different downstream claims about identity, welfare, and continuity. The framework reframes the "what is the LLM interlocutor?" question as a choice between four candidates rather than a single ambiguity, and that reframing is portable to analyses that otherwise disagree with Chalmers' specific answer.


Source: What We Talk To When We Talk To Language Models (David J. Chalmers)

