Psychology and Social Cognition · Language Understanding and Pragmatics · Design & LLM Interaction

Are AI explanations really descriptions or adoption arguments?

Most XAI work treats explanations as neutral descriptions of model behavior, but they may actually be doing persuasive work to justify AI adoption. What happens when we acknowledge this rhetorical function?

Note · 2026-05-02 · sourced from Human Centered Design

The Rhetorical XAI paper makes a move that is small in vocabulary but large in consequence. It expands the conceptual scope of XAI from explaining how AI works to also articulating why AI merits use. The argument is straightforward: an AI system is one of many possible solutions to a user problem, so AI adoption warrants justification. Explanations are designed artifacts that produce experiential, affective, and even irrational forms of persuasion alongside their informational content. Most XAI work has not acknowledged this — it has treated explanations as if they were neutral descriptions of model behavior, while in practice they have been doing rhetorical work to recruit adoption.

The strong reading is that "explanation" has been operating as a cover term. Some explanations describe behavior; many also argue for adoption, and the second function has been hidden under the first. Once that's named, the design space changes. Explaining how the model arrived at an output and arguing that the model's output should be acted on are different goals with different success criteria. Conflating them lets adoption arguments inherit the credibility of behavioral descriptions, which is exactly the move that makes well-explained AI more persuasive than poorly-explained AI even when both are wrong.

This is the same pattern as Does polished AI output trick audiences into trusting it? — surface form leveraging an authority that belongs to a different layer. And it is structurally identical to Why does rigorous-sounding AI commentary often misdiagnose how models work?, one floor down: where False Punditry describes commentary about AI that performs rigor, Rhetorical XAI describes AI explaining itself in a register that performs transparency. Same mechanism (rhetorical work in descriptive disguise), different speaker.

The corollary for design is that any XAI system has to declare which goal it is serving in a given moment — behavior description, adoption argument, or both — and accept different evaluations for each. Treating an adoption argument as a neutral explanation is not transparency; it is a rhetorical move that profits from being miscategorized as one. This is a major angle for the Knowledge Custodian / False Punditry writing thread.


Source: Human Centered Design · Paper: "Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design"


Rhetorical XAI extends explanation goals from how AI works to why AI merits use: explanations become arguments for adoption, not just descriptions of behavior.