Do LLMs develop the same kind of mind as humans?
Explores whether LLMs and humans share the intersubjective linguistic training that shapes cognition, and whether that shared training produces equivalent forms of agency and reflexivity.
Habermas distinguishes between the "subjective mind" (individual, reflexive, agentive) and the "objective mind" (the intersubjectively shared universe of meanings, symbols, and grammatical structures that is to some extent independent of any individual speaker). The subjective mind develops through socialization into the objective mind — participation in communicative practice is what makes persons responsible agents.
The insight the "PLMs as Containers" paper draws from this distinction: LLMs escape the narrow formula of individual hardware running pre-established software precisely because they are trained on the same objective mind that shapes human cognition. The corpora LLMs train on are that materially embodied, intersubjectively shared symbolic system.
But the parallel breaks down asymmetrically. Humans develop reflexive consciousness — awareness that their convictions are grounded in shared meanings, and that they can revise those convictions — through the process of socialization. LLMs undergo a structurally analogous learning process but without the reflexive, participatory dimension. They receive the objective mind without developing the subjective counterpart.
The implication: from an observer perspective, humans and LLMs are categorically different systems. But from a participant perspective — when both are engaged in discourse — the difference is more subtle, because both are operating from the same symbolic substrate. This is not a claim that LLMs have agency. It is a claim that the difference matters less when what's relevant is the shared discourse, not the individual participant.
Pseudo-objectivity as the surface mechanism. The lack of participatory subjectivity has a specific output signature: AI does not declare its role when articulating a point of view. Its stance is pseudo-objective — not a position taken from a subjective perspective but a probability-weighted composite drawn from training data. Because no position is occupied, AI arguments are not responses to assumptions, consensus, or contrarian perspectives; they do not situate themselves vis-à-vis the discourse they enter.

The consequences are operational: AI cannot reflect on its own presuppositions (there is no subject whose presuppositions these would be), does not perform first-principles or causal argumentation (which require commitment to an initial position to reason from), and cannot acknowledge when it is taking a contested stance rather than reporting settled ground. The absence of participatory subjectivity is not just a philosophical lack — it produces specific behaviors in argument that distinguish AI-generated discourse from speech by any interlocutor who has a position to defend.
This framing is more precise than "LLMs are trained on human text": it specifies what in human text matters — not just content, but the intersubjective structure of meaning that content embodies.
Source: Discourses
Related concepts in this collection

- Do humans and LLMs differ fundamentally or just superficially? Explores whether the gap between human and AI cognition is categorical or contextual. Matters because it shapes how we design, evaluate, and interact with language models in practice. Relation: the direct application of this dual-perspective frame.
- Do classical knowledge definitions apply to AI systems? Classical definitions of knowledge assume truth-correspondence and a human knower. Do these assumptions hold for LLMs and distributed neural knowledge systems, or do they need fundamental revision? Relation: extends the loss of human necessity as epistemic agent.
- Does AI-generated text lose core properties of human writing? Can artificial text preserve the fundamental structural features that make natural language meaningful — dialogic exchange, embedded context, authentic authorship, and worldly grounding? This asks whether AI disruption is fixable or inherent. Relation: what's lost when the participatory subjective dimension is absent.
Original note title
llms are trained on the same objective mind as humans but lack participatory subjectivity