Where does the meaning of an AI explanation actually come from?
Does a single user reading an explanation create its meaning, or does meaning emerge from the social layers surrounding that reading—colleagues' interpretations, organizational norms, public discourse?
The Rhetorical XAI paper, citing Keenan and Sokol's reading of Luhmann's multi-layer cybernetics, argues that the meaning of AI explanations emerges from complex N-order interpretations within social groups, rather than from simple dyadic human–AI dialogue. The point is structural. A single user reading an explanation is not where its meaning is constituted. The user is reading inside a context — organizational role, professional norms, prior public discourse about the model, peer interpretations — and the meaning of the explanation is the joint product of these layered observations of observations. Ehsan and colleagues' work on social translucence makes the same point from the CSCW side: AI explanations live in sociocultural communication practices, not in moment-of-reading semantics.
The implication is that user studies which show participants an explanation and ask what they understood are measuring first-order interpretation in artificial isolation. In deployment, what an explanation means to a user depends on what their colleagues say about the system, what regulators have said about the model class, what the press cycle has reported, and what the user's last bad experience with a similar system was. These are second- and third-order interpretations, and they constitute the meaning that actually governs adoption decisions. Strip them out and the explanation tested in the lab is not the explanation that operates in the world.
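To make the structural claim concrete, here is a deliberately toy sketch, not anything from the Rhetorical XAI paper: every function name, weight, and signal value below is an illustrative assumption. It shows how a lab measurement of first-order interpretation can diverge from the socially layered meaning that operates in deployment.

```python
# Toy model of N-order interpretation: the "meaning" that governs an
# adoption decision is a joint product of layered observations, not the
# first-order reading alone. All structure and numbers are illustrative.

def first_order(explanation_quality: float) -> float:
    """What the isolated user takes from the explanation (the lab setting)."""
    return explanation_quality

def second_order(first: float, peer_sentiment: float) -> float:
    """The user's reading of colleagues' readings of the system."""
    return 0.5 * first + 0.5 * peer_sentiment

def third_order(second: float, public_discourse: float) -> float:
    """The user's reading of how peers read press and regulatory discourse."""
    return 0.6 * second + 0.4 * public_discourse

explanation_quality = 0.9   # the explanation tests well in isolation
peer_sentiment = -0.2       # colleagues distrust the system
public_discourse = -0.5     # a critical press cycle about the model class

lab_meaning = first_order(explanation_quality)
deployed_meaning = third_order(
    second_order(lab_meaning, peer_sentiment), public_discourse
)

print(f"lab (first-order) meaning:      {lab_meaning:+.2f}")
print(f"deployed (third-order) meaning: {deployed_meaning:+.2f}")
# lab:      +0.90  -> the explanation looks effective in the user study
# deployed: +0.01  -> the same explanation barely clears neutral in context
```

The numbers are arbitrary; the point is structural. The second- and third-order terms enter the quantity that actually drives the adoption decision, so measuring only the first-order term mispredicts it.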
This is a Luhmann hub note for the What happens to social order when AI removes ritual constraints? thread and is directly relevant to the Meaning Gap knowledge graph work. It runs parallel to Do language models learn abstract grammar or cultural speech patterns? from the production side: meaning is socially situated whether it is being produced by a model or interpreted by a user. The same observation lands on both sides of the interface: AI systems generate inside culturally situated patterns, and AI explanations are interpreted inside socially layered observation chains. Treating either as if it were happening in a vacuum loses the constitutive context.
The design corollary is that XAI evaluation has to include the social layers it currently abstracts away — peer use, organizational sensemaking, public discourse — or accept that lab effectiveness will not predict deployed effectiveness.
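One way to operationalize that corollary is to make the social layers explicit, required parts of an evaluation plan rather than uncontrolled background. The sketch below is an assumption of mine, not a protocol from the source; all field names and example values are hypothetical.

```python
# Sketch of an XAI evaluation plan that names the social layers explicitly.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class XAIEvaluationPlan:
    # First-order: the layer most lab studies already measure.
    individual_comprehension_tasks: list[str]
    # Second-order: peer use and organizational sensemaking around the system.
    peer_use_observation: str
    org_sensemaking_artifacts: list[str]   # e.g. meeting notes, triage tickets
    # Third-order: discourse the user reads others as reading.
    public_discourse_sampling: str         # press, regulator statements
    # Outcome measured in context, not in isolation.
    adoption_decision_endpoint: str

plan = XAIEvaluationPlan(
    individual_comprehension_tasks=["forward prediction", "counterfactual quiz"],
    peer_use_observation="shadow two teams using the system for four weeks",
    org_sensemaking_artifacts=["triage channel logs", "incident postmortems"],
    public_discourse_sampling="monthly sample of press and regulatory coverage",
    adoption_decision_endpoint="per-case reliance decisions over the study period",
)
```

If a study cannot fill the second- and third-order fields, that is itself a finding: the measured effectiveness is conditional on their absence.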
Source: Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design (human-centered design paper)
Related concepts in this collection
- Do language models learn abstract grammar or cultural speech patterns? LLMs might learn more than grammar rules: they could be learning who says what to whom and when. This matters because it changes how we understand what biases and persona effects actually represent. (Parallel: meaning is socially situated on both production and interpretation sides.)
- What if XAI is fundamentally a communication problem? Does explanation effectiveness depend on who delivers it, how it's framed, and who uses it? This challenges the dominant technical view that treats explanations as context-independent outputs. (Sibling: N-order interpretation is the deeper structure of the source-framing-recipient triad.)
- How does AI writing escape the conversations that govern knowledge? If knowledge claims normally get filtered and refined through social discourse, what happens when AI generates claims outside that governing process? Why does scale matter here? (Related: both insights name the cost of decoupling artifacts from the social processes that constitute their meaning.)
Original note title: explanation meaning emerges from N-order interpretation in social groups, not from dyadic human-AI dialogue