Language Understanding and Pragmatics · Psychology and Social Cognition

Where does the meaning of an AI explanation actually come from?

Does a single user reading an explanation create its meaning, or does meaning emerge from the social layers surrounding that reading—colleagues' interpretations, organizational norms, public discourse?

Note · 2026-05-02 · sourced from Human Centered Design
What happens to social order when AI removes ritual constraints? Why do AI systems fail at social and cultural interpretation?

The Rhetorical XAI paper, citing Keenan and Sokol's reading of Luhmann's multi-layer cybernetics, argues that the meaning of AI explanations emerges from complex N-order interpretations within social groups, rather than from simple dyadic human–AI dialogue. The point is structural. A single user reading an explanation is not where its meaning is constituted. The user is reading inside a context — organizational role, professional norms, prior public discourse about the model, peer interpretations — and the meaning of the explanation is the joint product of these layered observations of observations. Ehsan and colleagues' work on social translucence makes the same point from the CSCW side: AI explanations live in sociocultural communication practices, not in moment-of-reading semantics.

The implication is that user studies that show participants explanations and ask them what they understood are measuring first-order interpretation in artificial isolation. In deployment, what an explanation means to a user depends on what their colleagues say about the system, what regulators have said about the model class, what the press cycle has reported, what the user's last bad experience with a similar system was. These are second- and third-order interpretations, and they constitute the meaning that actually governs adoption decisions. Strip them out and the explanation tested in the lab is not the explanation that operates in the world.

This is a Luhmann hub note for the What happens to social order when AI removes ritual constraints? thread and is directly relevant to the Meaning Gap knowledge graph work. It runs parallel to Do language models learn abstract grammar or cultural speech patterns? from the production side: meaning is socially situated whether it is produced by a model or interpreted by a user. The same observation lands on both sides of the interface: AI systems generate inside culturally situated patterns, and AI explanations are interpreted inside socially layered observation chains. Treating either as if it happened in a vacuum loses the constitutive context.

The design corollary is that XAI evaluation has to include the social layers it currently abstracts away, such as peer use, organizational sensemaking, and public discourse, or accept that lab effectiveness will not predict deployed effectiveness.


Source: Human Centered Design · Paper: Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design

explanation meaning emerges from N-order interpretation in social groups not from dyadic human-AI dialogue