Can LLMs acquire social grounding through linguistic integration?
Explores whether LLMs gradually develop social grounding as they become embedded in human language practices, analogous to child language acquisition. Tests whether grounding is a fixed property or an outcome of participatory use.
Following Wittgenstein's use-theoretic conception of meaning — where linguistic meaning is constituted by the functional roles of utterances in language games — social grounding is not a property an agent simply has or lacks. It is acquired through participation in the shared practices of a linguistic community.
The argument from "Understanding AI" (Schneider 2024): LLMs become participants in our language games precisely to the extent that we include them in our linguistic practices. The process is gradual: the more useful LLMs are, the more they are integrated into linguistic practice, the more they become established as communicative partners, the more they acquire social grounding. The strongest LLMs may already have acquired an elementary social grounding comparable to young children — limited, but not zero.
This is not a metaphor but a theoretical claim: if meaning is use and grounding is participation, then participation grounds. The analogy to child language acquisition is structurally apt: children also begin with limited social grounding that increases through socialization into linguistic communities.
Two important constraints:
- LLM social grounding is currently limited to linguistic behavior — no embodiment, no physical intervention, no full Wittgensteinian "game" participation
- Social and causal grounding overlap here: MuZero doesn't "play" chess in Wittgenstein's sense because it lacks the social context of game-playing as a shared behavioral practice
The practical implication: the question "do LLMs understand?" has a time-indexed answer. As deployment scales and LLMs become more integrated into linguistic practice, the answer shifts — not because the model changes, but because the social conditions of grounding change.
This sits in tension with the enactive view explored in "What makes linguistic agency impossible for language models?", which argues that the absence of embodiment and precariousness is a difference of category, not of degree.
Source: Linguistics, NLP, NLU
Related concepts in this collection
- Does semantic grounding in language models come in degrees?
  Rather than asking whether LLMs truly understand meaning, this explores whether grounding is actually a multi-dimensional spectrum. The question matters because it reframes the sterile understand/don't-understand debate into measurable, distinct capacities.
  (this is the social grounding dimension)
- Does AI text affect readers the same way human text does?
  If text is a condition of social processes rather than merely a container, does the origin of text matter to its effects? This explores whether AI-generated content enters the same interpretive and epistemic circuits as human writing.
  (parallel claim: effects are already equivalent even before full social grounding)
- What makes linguistic agency impossible for language models?
  From an enactive perspective, does linguistic agency require embodied participation and real stakes that LLMs fundamentally lack? This matters because it challenges whether LLMs can truly engage in language or only generate text.
  (the counterargument: this is a category difference, not degree)
- Can AI systems learn social norms without embodied experience?
  Large language models exceed individual human accuracy at predicting collective social appropriateness judgments. Does this reveal that embodied experience is unnecessary for cultural competence, or do systematic AI failures point to limits of statistical learning?
  (empirical complication: LLMs already predict social norms at the 100th percentile without integration into linguistic practices, suggesting social grounding for norm prediction may not require the gradual acquisition process this note describes)
Original note title
llm social grounding increases as llms are integrated into human linguistic practices