Can we describe LLM beliefs without assuming consciousness?
Chalmers proposes quasi-interpretivism as a way to talk about LLM mental states using folk-psychological vocabulary while explicitly bracketing the question of phenomenal consciousness. Does this methodological device actually avoid consciousness-commitments?
Chalmers introduces quasi-interpretivism as the vocabulary for ascribing belief-like states to LLMs without committing to the claim that these systems are conscious. A system has a quasi-belief that p if its behavior is best interpreted by a rational-agent model that takes it to believe p. The "quasi-" prefix flags that the functional role is in place — the behavioral signature of belief, updated in appropriate ways by prompts and context — while the phenomenal question is explicitly set aside. The same move extends to quasi-desires, quasi-intentions, and quasi-psychology more broadly.
The device solves a specific methodological problem: folk-psychological vocabulary is the natural tool for describing coherent dialogue behavior, but applying it in the full sense imports consciousness-commitments the evidence does not support. Quasi-interpretivism gives Chalmers a middle way. One can say that the system "quasi-believes France is in Europe" and mean that its answers, revisions, and downstream inferences track the way a believing agent's would, without thereby claiming it feels anything. The prefix is a load-bearing hedge — every claim about LLM mental states in the paper takes this form, and the argumentative power of the analysis depends on the hedge being coherent.
The vocabulary has real utility for sub-personal functional states, where behavior and structure can substitute for felt experience in specifying what a state does. It travels less well to states whose identity is partly relational or normative: communicative states, for instance, where being oriented toward mutual understanding is constitutive of the state rather than added to a prior functional substrate. Quasi-interpretivism works for belief because belief can be characterized functionally from a third-person stance; it does not obviously work for speech acts, validity-claim raising, or interlocutor-role occupancy, which require a first-person stake the system does not have. The device is powerful within its proper scope and overreaches outside it.
Source: What We Talk To When We Talk To Language Models (David J. Chalmers)
Related concepts in this collection
- Are RLHF personas performed characters or realized dispositions?
  Explores whether dialogue agent personas installed through post-training constitute genuine quasi-psychological states or remain sustained pretense. The distinction matters for how we understand what these systems fundamentally are.
  Relation: the affirmative application of quasi-interpretivism
- Do LLMs develop the same kind of mind as humans?
  Explores whether LLMs and humans share the intersubjective linguistic training that shapes cognition, and whether that shared training produces equivalent forms of agency and reflexivity.
  Relation: parallel claim that LLMs have the substrate without the agent-level attribute
- Should we treat dialogue agents as role-playing characters?
  Does the role-play framing successfully avoid anthropomorphism while preserving folk-psychological vocabulary for describing LLM behavior? This matters because it shapes whether we attribute genuine mental states to dialogue systems.
  Relation: Shanahan's alternative: folk-psychology attaches to the played character, not the system
Original note title: quasi-interpretivism treats systems as having quasi-beliefs when behaviorally interpretable as believing — the prefix brackets consciousness without settling it