Do LLMs gain true linguistic agency through integration?
Explores whether LLMs can develop genuine linguistic agency—the capacity to be embodied, stake-bearing participants in meaning-making—as they become embedded in human language practices, or whether this requires fundamental architectural changes.
Two theoretical frameworks reach apparently contradictory conclusions about LLM language use. The Understanding AI view (Schneider 2024): social grounding increases gradually as LLMs are integrated into linguistic practices, analogous to children acquiring language-community membership through participation. The more LLMs function as communicative partners, the more elementary social grounding they acquire. The enactive view ("Large Models of What?", 2024): linguistic agency requires embodiment, participation, and precariousness, properties "likely incompatible in principle with current architectures". The absence is categorical.
The apparent contradiction dissolves when the two views are recognized as naming different properties:
Social grounding (Schneider): a functional-social property acquired through participation in language games (Wittgenstein). It is relational — a matter of how the LLM is positioned in discourse practices by the communities that use it. This can increase through integration because the positioning changes even if the LLM does not.
Linguistic agency (enactive view): a constitutive property requiring the agent to be embedded in a world through a body, to have stakes in communicative outcomes (precariousness), and to participate in meaning-making as a reciprocal process. This cannot increase through integration because integration changes how others relate to the LLM, not whether the LLM has a body or precarious existence.
Both can be simultaneously true: LLMs acquire more social grounding as they are integrated AND remain categorically non-linguistic-agents in the enactive sense. The first is a claim about community practices; the second is a claim about constitutive architecture.
As explored in "Does semantic grounding in language models come in degrees?", this distinction maps onto the grounding taxonomy: social grounding is the dimension that increases through use; enactive linguistic agency is the dimension that requires architectural change no amount of use can provide.
Source: Linguistics, NLP, NLU
Related concepts in this collection
- Can LLMs acquire social grounding through linguistic integration? Explores whether LLMs gradually develop social grounding as they become embedded in human language practices, analogous to child language acquisition. Tests whether grounding is a fixed property or an outcome of participatory use. (Role here: the gradual-acquisition pole; social grounding as a relational property.)
- What makes linguistic agency impossible for language models? From an enactive perspective, does linguistic agency require embodied participation and real stakes that LLMs fundamentally lack? This matters because it challenges whether LLMs can truly engage in language or only generate text. (Role here: the categorical-absence pole; linguistic agency as a constitutive property.)
- Does semantic grounding in language models come in degrees? Rather than asking whether LLMs truly understand meaning, this explores whether grounding is actually a multi-dimensional spectrum. The question matters because it reframes the sterile understand/don't-understand debate into measurable, distinct capacities. (Role here: provides the framework; the two poles correspond to different grounding dimensions, not competing claims about the same thing.)
Original note title
social grounding and linguistic agency are distinct properties — llms acquire more of the former through integration while categorically lacking the latter