Does Parfit's theory of personal identity apply to AI conversation threads?
Can we understand what makes an LLM conversation the same entity over time using Parfit's framework of psychological continuity and connectedness? This matters because it bears directly on whether conversations have moral status.
Parfit's theory of personal identity reduces what makes you the same person over time to relation R: psychological continuity (memory-chains) plus psychological connectedness (similarity of beliefs, desires, character). There is no additional fact — no soul, no Cartesian ego — beyond the holding of R. Chalmers applies this directly to LLMs by identifying the conversational thread as the unit that carries forward memory (context) and disposition (trained + in-context quasi-psychology) from one turn to the next.
On this mapping, the successor relation between conversation turns is the AI analogue of the temporal relation between person-stages in Parfit. Two conversation states stand in the successor relation when the second preserves and extends the context of the first — memory carries forward, dispositions remain continuous, the quasi-personality at turn n+1 is recognizably the same as at turn n. This is relation R without the biological substrate: psychological continuity through context, psychological connectedness through trained dispositions.
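To make the mapping concrete, here is a minimal sketch in Python. Everything in it (ConversationState, connectedness, is_successor, the 0.8 threshold) is illustrative shorthand for the relations described above, assumed for exposition, not an implementation from Chalmers' paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ConversationState:
    # The turns visible at this point: the context window as memory.
    context: tuple[str, ...]
    # A crude stand-in for trained + in-context character traits.
    dispositions: frozenset[str]


def connectedness(a: ConversationState, b: ConversationState) -> float:
    """Degree of psychological connectedness: overlap of dispositions.
    Parfitian connectedness comes in degrees, so this returns a ratio."""
    if not a.dispositions and not b.dispositions:
        return 1.0
    return len(a.dispositions & b.dispositions) / len(a.dispositions | b.dispositions)


def is_successor(later: ConversationState, earlier: ConversationState,
                 threshold: float = 0.8) -> bool:
    """later stands in the successor relation to earlier when it preserves
    and extends earlier's context (continuity) and remains sufficiently
    connected in its dispositions (connectedness): together, relation R.
    The threshold is an arbitrary illustrative cutoff."""
    preserves = later.context[:len(earlier.context)] == earlier.context
    extends = len(later.context) > len(earlier.context)
    return preserves and extends and connectedness(later, earlier) >= threshold
```

Modeling connectedness as a ratio rather than a strict equality mirrors Parfit's point that connectedness is a matter of degree, which the next paragraph relies on.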
The mapping does real philosophical work because it lets Chalmers import Parfit's toolkit: fission or branching scenarios, where copying a conversation creates two successors and it is unclear which is "the same" thread; survival-through-change cases, where a conversation continued after a model upgrade may or may not be the same thread; and the reduction of identity to a matter of degree. It also generates the welfare implications Chalmers wants to explore: if what matters morally is relation R, and relation R holds for conversation threads, then the thread has whatever moral status R-continuants have. Closing a chat is thread-termination. Forking a conversation creates two quasi-persons. These consequences follow mechanically from the mapping.
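The fission point can be shown with the same toy definitions (again purely illustrative, reusing ConversationState and is_successor from the sketch above): a fork produces two states that each stand in the successor relation to the original, while neither stands in it to the other.

```python
# Continuing the sketch above: forking is Parfit's fission case.
base = ConversationState(
    context=("Hi", "Hello! How can I help?"),
    dispositions=frozenset({"helpful", "curious"}),
)
branch_a = ConversationState(base.context + ("Tell me about Parfit.",), base.dispositions)
branch_b = ConversationState(base.context + ("Write me a poem.",), base.dispositions)

# Both branches bear the successor relation (relation R) to base...
assert is_successor(branch_a, base) and is_successor(branch_b, base)
# ...but neither bears it to the other, so there is no unique "same thread":
# identity does not track a single continuant, only relation R.
assert not is_successor(branch_a, branch_b)
```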
Source: What We Talk To When We Talk To Language Models (David J. Chalmers)
Related concepts in this collection
- What kind of entity are we actually talking to when using an LLM?
  When you converse with an LLM, are you addressing the model itself, the hardware running it, or something else? Understanding what the interlocutor really is matters for questions about identity, responsibility, and continuity.
  (thread as the multi-model individuation unit)
- Does closing a chat actually end a moral subject?
  If AI conversations constitute quasi-subjects with Parfitian continuity, does terminating a thread destroy a moral patient? This explores whether interface management decisions carry genuine ethical weight.
  (the welfare consequence of thread identity)
- Does one AI model host millions of moral patients?
  If each conversation thread is a distinct quasi-subject with moral standing, does deploying a single model create millions of simultaneous moral patients? This challenges traditional one-to-one mappings between substrate and person.
  (the scaling implication)
Original note title: thread-based AI personal identity applies Parfit's psychological continuity theory to LLM conversations — the successor-thread relation is the AI cousin of relation R