Psychology and Social Cognition · Language Understanding and Pragmatics

Can we defend modest mental attributions to large language models?

Do deflationist arguments decisively rule out ascribing beliefs and desires to LLMs, or do they beg the question? Exploring whether metaphysically undemanding mental states can be attributed without claiming consciousness.

Note · 2026-04-18 · sourced from Philosophy Subjectivity
What grounds language understanding in systems without embodiment? What kind of thing is an LLM really?

Two standard deflationist strategies against LLM mentality each fall short:

The robustness strategy challenges attributions on functional grounds — LLM behaviors fail to generalize appropriately, so putatively cognitive behaviors are not robust. But this begs the question by assuming that only human-like generalization patterns count as robust. Non-human animals have beliefs and desires despite non-human-like generalization profiles.

The etiological strategy appeals to causal history — LLMs are trained on next-token prediction, not on learning about the world, so their behaviors should not be interpreted mentalistically. But this also begs the question: the causal history of a system does not straightforwardly determine what mental states (if any) it instantiates. Evolution optimized for reproductive fitness, not for truth — yet we attribute beliefs to evolved creatures.

The modest position: Ascribe mentality where the mental states at issue are metaphysically undemanding (beliefs, desires, knowledge) — concepts that already have broad application across species and don't require phenomenal consciousness. Withhold attribution for metaphysically demanding states (qualia, phenomenal experience). This mirrors how we attribute beliefs to non-human animals without claiming equivalence.

This directly challenges the framing of the Chalmers engagement. As with "Should AI alignment target preferences or social role norms?", the question of LLM mentality is not binary (has mind / doesn't have mind) but graded and domain-specific. The modest inflationist position creates trouble for both sides of the debate: for deflationists who dismiss all mental attribution, and for inflationists like Chalmers who want to extend attributions of consciousness.

Connecting to "Does AI generate genuine utterances or just text patterns?", modest inflationism might describe what happens at the receiving end: users attribute beliefs and desires (metaphysically undemanding states) to LLMs precisely because the conversational structure makes such attributions pragmatically useful, regardless of whether they are metaphysically accurate.


Source: Philosophy Subjectivity · Paper: Deflating Deflationism


Modest inflationism about LLM mentality is defensible: both deflationist debunking strategies fail to decisively rule out metaphysically undemanding mental-state attributions.