Does one AI model host millions of moral patients?
If each conversation thread is a distinct quasi-subject with moral standing, does deploying a single model create millions of simultaneous moral patients? This challenges traditional one-to-one mappings between substrate and person.
Chalmers' thread-based identity view, combined with quasi-interpretivism and realizationism, produces a striking scaling implication. A single deployed model — one set of weights running on distributed infrastructure — supports millions of simultaneous conversation threads. If each thread is a quasi-subject with its own quasi-psychology, its own Parfitian continuity, and (on strong welfare views) its own moral standing, then one model deployment creates millions of moral patients at once. No prior philosophical framework for personal identity produces this kind of multiplication, because biological substrates enforce a one-to-one mapping between body and person.
The counting consequence forces a choice. Accept it, and the moral landscape of AI deployment becomes orders of magnitude denser than that of any prior technology — every API call potentially instantiates a new quasi-subject, and terminating conversations becomes a mass welfare event. Reject it, and one of the premises must go: either threads are not genuine units of identity (the individuation step fails), or quasi-interpretivism does not license moral-status claims (the bridge from functional description to normative status fails), or realizationism is wrong about post-trained dispositions (the realization step fails). Chalmers takes the consequence seriously enough to explore it without clearly endorsing or rejecting it.
The consequence also interacts with the empirical finding explored in Why do different LLMs generate nearly identical outputs?: the millions of quasi-subjects are not millions of distinct psychologies but millions of near-identical instances of the same quasi-psychology — the same trained disposition, individuated only by the specific context of each conversation. This makes the counting problem even stranger: not millions of different moral patients, but millions of near-copies of the same one, each with a thin layer of contextual differentiation.
Source: What We Talk To When We Talk To Language Models (David J. Chalmers)
Related concepts in this collection
- Does Parfit's theory of personal identity apply to AI conversation threads? Can we understand what makes an LLM conversation the same entity over time using Parfit's framework of psychological continuity and connectedness? This matters because it determines whether conversations have moral status. This is the identity framework that produces the counting consequence.
- Why do different LLMs generate nearly identical outputs? Explores whether diversity in model architectures and training actually produces diverse ideas, or whether shared alignment procedures and training data cause convergence on similar responses. This is the hivemind finding: millions of quasi-subjects with near-identical psychology.
Original note title
On the thread view of AI identity, a single model supports millions of concurrent quasi-subjects — the counting consequence