What if XAI is fundamentally a communication problem?
Does explanation effectiveness depend on who delivers it, how it's framed, and who uses it? If so, that challenges the dominant technical view that treats explanations as context-independent outputs.
The Rhetorical XAI paper makes the strong claim that XAI is not solely a technical problem of producing faithful rationales — it is a communication problem because explanations are situated messages whose interpretation is mediated by who presents them, how they are framed, and who must act on them. Different stakeholders use the same explanation for different goals: developers debug, ethicists assess accountability, end-users decide whether to trust an output for a specific task. The same artifact takes on different meanings across these positions, so effectiveness is not intrinsic to the explanation. It is a property of the triad — source, framing, recipient — and any evaluation that holds the recipient role constant or implicit is measuring something narrower than what the explanation actually does in deployment.
The reframing matters because the dominant XAI program treats explanation as a faithful-rationale problem and evaluates it with proxies (preference, comprehension on a fixed task) that bake in a single recipient role. The communication framing forces the field to specify the rhetorical situation each explanation is built for, rather than treating "explanation" as a noun to be optimized in the abstract. This is a Lasswell/Jakobson shift: explanation as a communicative act with sender, channel, message, receiver, and code, not an interpretability output emitted from a model. This aligns with the Conversation Glossary direction: communication-centric perspectives (Habermas, Goffman, Austin, Bakhtin) all start from situated messages, and rhetorical XAI imports that frame into the AI explainability literature.
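To make the contrast concrete, here is a minimal Python sketch of the two evaluation signatures being opposed: an artifact-centric score that takes only the explanation, versus a situation-centric score parameterized by the source-framing-recipient triad. All type and field names here are illustrative assumptions, not constructs from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Explanation:
    """The artifact: a rationale emitted by a model."""
    rationale: str

@dataclass(frozen=True)
class RhetoricalSituation:
    """The source-framing-recipient triad the note argues effectiveness depends on."""
    source: str     # who presents the explanation, e.g. "model" or "clinician"
    framing: str    # how it is packaged, e.g. "counterfactual" or "feature weights"
    recipient: str  # who must act on it, e.g. "developer" or "end-user"
    goal: str       # what the recipient uses it for, e.g. "debugging" or "trust calibration"

# Artifact-centric evaluation: effectiveness treated as intrinsic to the explanation.
ArtifactScore = Callable[[Explanation], float]

# Situation-centric evaluation: effectiveness is a property of the explanation
# *and* the rhetorical situation it lands in.
SituationScore = Callable[[Explanation, RhetoricalSituation], float]

def toy_situation_score(exp: Explanation, sit: RhetoricalSituation) -> float:
    """Toy scorer: the same artifact scores differently across recipient roles."""
    # Purely illustrative numbers, not empirical claims.
    fit = {
        ("feature weights", "developer"): 0.9,  # debugging benefits from internals
        ("counterfactual", "end-user"): 0.9,    # acting on an output benefits from what-ifs
    }
    return fit.get((sit.framing, sit.recipient), 0.3)

if __name__ == "__main__":
    exp = Explanation("top features: age (+0.4), income (-0.2)")
    for recipient, goal in [("developer", "debugging"), ("end-user", "trust calibration")]:
        sit = RhetoricalSituation("model", "feature weights", recipient, goal)
        print(recipient, toy_situation_score(exp, sit))
```

Under this sketch, freezing the RhetoricalSituation turns a SituationScore back into an ArtifactScore, which is exactly the narrowing the note attributes to benchmarks that hold the recipient role constant or implicit.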
This extends "What makes explanations work in real conversation?" from the dialogue layer up to the broader rhetorical situation: Madumal et al.'s three dimensions are the fine-grained instance of the larger source-framing-recipient claim, applied within a turn-by-turn explanatory exchange. It also parallels "How does AI writing escape the conversations that govern knowledge?", since both insights argue that decoupling knowledge artifacts from the social processes that constitute their meaning produces an artifact that performs adequacy without delivering it. Stripping the rhetorical situation out of XAI leaves a faithful rationale that is not, for any actual recipient, an explanation.
Source: Human-Centered Design paper, "Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design"
Related concepts in this collection
- "What makes explanations work in real conversation?" Does explanation quality depend on how dialogue partners interact (testing understanding, adjusting based on feedback, and coordinating their communicative moves) rather than on information content alone? Relation: extends; the three-dimension dialogical analysis is a fine-grained instance of the broader rhetorical-situation claim.
- "How does AI writing escape the conversations that govern knowledge?" If knowledge claims normally get filtered and refined through social discourse, what happens when AI generates claims outside that governing process? Why does scale matter here? Relation: parallel reframing; both move from artifact-centric to situation-centric evaluation.
- "Why does rigorous-sounding AI commentary often misdiagnose how models work?" Expert commentary on AI frequently cites real research and sounds carefully reasoned, yet reaches conclusions built on unwarranted cognitive attributions. What makes this pattern so persistent in AI analysis? Relation: same phenomenon at an adjacent layer.
Original note title: XAI is a communication problem not a transparency problem — explanations are situated messages whose meaning depends on source, framing, and recipient role