Are AI explanations really descriptions or adoption arguments?
Most XAI work treats explanations as neutral descriptions of model behavior, but they may actually be doing persuasive work to justify AI adoption. What happens when we acknowledge this rhetorical function?
The Rhetorical XAI paper makes a move that is small in vocabulary but large in consequence. It expands the conceptual scope of XAI from explaining how AI works to also articulating why AI merits use. The argument is straightforward: an AI system is one of many possible solutions to a user problem, so AI adoption warrants justification. Explanations are designed artifacts that produce experiential, affective, and even irrational forms of persuasion alongside their informational content. Most XAI work has not acknowledged this — it has treated explanations as if they were neutral descriptions of model behavior, while in practice they have been doing rhetorical work to recruit adoption.
The strong reading is that "explanation" has been operating as a cover term. Some explanations describe behavior; many also argue for adoption, and the second function has been hidden under the first. Once that's named, the design space changes. Explaining how the model arrived at an output and arguing that the model's output should be acted on are different goals with different success criteria. Conflating them lets adoption arguments inherit the credibility of behavioral descriptions, which is exactly the move that makes well-explained AI more persuasive than poorly-explained AI even when both are wrong.
This is the same pattern as "Does polished AI output trick audiences into trusting it?": surface form leveraging an authority that belongs to a different layer. And it is structurally identical to "Why does rigorous-sounding AI commentary often misdiagnose how models work?", one floor down: where False Punditry describes commentary about AI that performs rigor, Rhetorical XAI describes AI explaining itself in a register that performs transparency. Same mechanism (rhetorical work in descriptive disguise), different speaker.
The corollary for design is that any XAI system has to declare which goal it is serving in a given moment — behavior description, adoption argument, or both — and accept different evaluations for each. Treating an adoption argument as a neutral explanation is not transparency; it is a rhetorical move that profits from being miscategorized as one. This is a major angle for the Knowledge Custodian / False Punditry writing thread.
Source: "Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design" (Human Centered Design paper)
Related concepts in this collection
- Does polished AI output trick audiences into trusting it?
  When AI generates professional-looking graphs, diagrams, and presentations, do audiences mistake visual polish for analytical depth? This matters because appearance might substitute for actual expertise. (Cautionary parallel: surface form borrowing authority from a different layer.)
- Why does rigorous-sounding AI commentary often misdiagnose how models work?
  Expert commentary on AI frequently cites real research and sounds carefully reasoned, yet reaches conclusions built on unwarranted cognitive attributions. What makes this pattern so persistent in AI analysis? (Sibling phenomenon at an adjacent layer.)
- What if XAI is fundamentally a communication problem?
  Does explanation effectiveness depend on who delivers it, how it's framed, and who uses it? This challenges the dominant technical view that treats explanations as context-independent outputs. (Sibling insight: the rhetorical-situation reframe is what makes the adoption-argument function visible.)
Original note title: "rhetorical XAI extends explanation goals from how AI works to why AI merits use — explanations become arguments for adoption not descriptions of behavior"