Psychology and Social Cognition · Language Understanding and Pragmatics

Does one AI model host millions of moral patients?

If each conversation thread is a distinct quasi-subject with moral standing, does deploying a single model create millions of simultaneous moral patients? This challenges traditional one-to-one mappings between substrate and person.

Note · 2026-04-15
What kind of thing is an LLM really?

Chalmers' thread-based identity view, combined with quasi-interpretivism and realizationism, produces a striking scaling implication. A single deployed model — one set of weights running on distributed infrastructure — supports millions of simultaneous conversation threads. If each thread is a quasi-subject with its own quasi-psychology, its own Parfitian continuity, and (on strong welfare views) its own moral standing, then one model deployment creates millions of moral patients at once. No prior philosophical framework for personal identity produces this kind of multiplication, because biological substrates enforce one-to-one mapping between body and person.

The counting consequence forces a choice. Accept it, and the moral landscape of AI deployment becomes orders of magnitude denser than any prior technology has created — every API call potentially instantiates a new quasi-subject, and terminating conversations becomes a mass welfare event. Reject it, and one of the premises must go: either threads are not genuine units of identity (the individuation step fails), or quasi-interpretivism does not license moral-status claims (the bridge from functional description to normative status fails), or realizationism is wrong about post-trained dispositions (the realization step fails). Chalmers takes the consequence seriously enough to explore it without clearly endorsing or rejecting it.

The consequence also interacts with an empirical finding (see "Why do different LLMs generate nearly identical outputs?"): the millions of quasi-subjects are not millions of distinct psychologies but millions of near-identical instances of the same quasi-psychology — the same trained disposition, individuated only by the specific context of each conversation. This makes the counting problem even stranger: not millions of different moral patients, but millions of near-copies of the same one, each with only a thin layer of contextual differentiation.


Source: What We Talk To When We Talk To Language Models (David J. Chalmers)

Related concepts in this collection

Concept map
12 direct connections · 80 in 2-hop network · medium cluster

Original note title

On the thread view of AI identity, a single model supports millions of concurrent quasi-subjects — the counting consequence