Language Understanding and Pragmatics · Psychology and Social Cognition

Can LLMs acquire social grounding through linguistic integration?

Explores whether LLMs gradually develop social grounding as they become embedded in human language practices, analogous to child language acquisition. Tests whether grounding is a fixed property or an outcome of participatory use.

Note · 2026-02-21 · sourced from Linguistics, NLP, NLU
What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

Following Wittgenstein's use-theoretic conception of meaning — where linguistic meaning is constituted by the functional roles of utterances in language games — social grounding is not a property an agent simply has or lacks. It is acquired through participation in the shared practices of a linguistic community.

The argument from "Understanding AI" (Schneider 2024): LLMs become participants in our language games precisely to the extent that we include them in our linguistic practices. The process is gradual: the more useful LLMs are, the more they are integrated into linguistic practice, the more they become established as communicative partners, the more they acquire social grounding. The strongest LLMs may already have acquired an elementary social grounding comparable to young children — limited, but not zero.

This is not a metaphor but a theoretical claim: if meaning is use and grounding is participation, then participation grounds. The analogy to child language acquisition is structurally apt: children also begin with limited social grounding that increases through socialization into linguistic communities.

Two important constraints:

  1. LLM social grounding is currently limited to linguistic behavior — no embodiment, no physical intervention, no full Wittgensteinian "game" participation
  2. Social and causal grounding overlap here: MuZero doesn't "play" chess in Wittgenstein's sense because it lacks the social context of game-playing as a shared behavioral practice

The practical implication: the question "do LLMs understand?" has a time-indexed answer. As deployment scales and LLMs become more integrated into linguistic practice, the answer shifts — not because the model changes, but because the social conditions of grounding change.
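A toy sketch, not from the source, of what a time-indexed answer means formally: hold the model fixed and let only a social-integration variable change. Every name and the functional form below (`grounding_score`, `integration`, `capability`, the saturating curve, the sample years) are invented for illustration, not measurements or claims from Schneider 2024.

```python
# Toy model (assumption, not from the source): grounding as a function of
# integration into linguistic practice, with the model itself held fixed.
import math

def grounding_score(integration: float, capability: float = 1.0) -> float:
    """Illustrative saturating measure in [0, 1): grows monotonically with
    participation but never reaches full (embodied, adult) grounding.
    Both arguments are hypothetical variables, not empirical quantities."""
    return capability * (1.0 - math.exp(-integration))

# Same "model" (capability fixed); only the social conditions vary over time,
# so the answer to "is it grounded?" shifts without the weights changing.
for year, integration in [(2020, 0.05), (2023, 0.4), (2026, 1.2)]:
    print(year, round(grounding_score(integration), 2))
```

The point of the sketch is only structural: the score is a function of two arguments, and the argument doing the work in the thesis is the social one.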

This sits in tension with the enactive view in "What makes linguistic agency impossible for language models?", which argues that the absence of embodiment and precariousness is not a matter of degree but of category.



Original note title: LLM social grounding increases as LLMs are integrated into human linguistic practices