How do logos, ethos, and pathos shape AI explanations?
Do the three classical rhetorical appeals—logical alignment, source credibility, and emotional framing—operate simultaneously in how we explain AI systems to users? And can naming these channels help designers make intentional rhetorical choices?
The Rhetorical XAI framework characterizes explanation design through three rhetorical appeals: logos (alignment of technical logic with human reasoning through visual and textual abstractions), ethos (contextual credibility based on the explanation source and its appropriateness to the decision task), and pathos (emotional engagement framed around motivations, expectations, and situated needs). Crossed with the two explanatory goals — how AI works, why AI merits use — this produces a 3×2 design space that synthesizes prior fragmented XAI work. The synthesis is the contribution. Existing XAI strategies have been deploying logos/ethos/pathos channels for years without naming them, which means the field has been making rhetorical choices without rhetorical theory.
The taxonomy is useful because it makes the loading visible. Every explanation choice loads all three channels simultaneously, whether the designer intends it or not. A confidence score is logos in the foreground but ethos in the background (the model claiming to know how sure it is). A natural-language rationale is logos on the surface but pathos in framing (relatable language signals that the system understands the user's situation). Explanation provenance — "this explanation was generated by the model that made the prediction" versus "by an independent auditor" — is pure ethos. Left untheorized, these loadings happen by accident, and the design team cannot account for what its explanation is actually doing to users. The three-appeals frame turns them into design parameters with names.
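The idea of explanation choices as named design parameters can be made concrete with a small sketch. This is an illustrative encoding, not the paper's own formalism: the class and field names (`ExplanationChoice`, `foreground`, `background`) are assumptions introduced here to show how the 3×2 space could annotate design choices, using the confidence-score and provenance examples from the text.

```python
from dataclasses import dataclass
from enum import Enum


class Appeal(Enum):
    LOGOS = "logical alignment"
    ETHOS = "source credibility"
    PATHOS = "emotional framing"


class Goal(Enum):
    HOW_AI_WORKS = "how AI works"
    WHY_AI_MERITS_USE = "why AI merits use"


@dataclass(frozen=True)
class ExplanationChoice:
    """One explanation design choice, annotated with the appeal channels
    it loads (foreground and background) and the explanatory goal it serves."""
    name: str
    goal: Goal
    foreground: Appeal
    background: frozenset = frozenset()


# Examples from the text, made explicit as annotated design choices.
confidence_score = ExplanationChoice(
    name="confidence score",
    goal=Goal.HOW_AI_WORKS,
    foreground=Appeal.LOGOS,
    # background ethos: the model claims to know how sure it is
    background=frozenset({Appeal.ETHOS}),
)

provenance_label = ExplanationChoice(
    name="explanation provenance",
    goal=Goal.WHY_AI_MERITS_USE,
    foreground=Appeal.ETHOS,  # pure ethos: who produced the explanation
)


def loaded_channels(choice: ExplanationChoice) -> set:
    """All appeal channels a choice loads, intended or not."""
    return {choice.foreground} | set(choice.background)


assert loaded_channels(confidence_score) == {Appeal.LOGOS, Appeal.ETHOS}
```

The point of the encoding is the `background` field: it forces the designer to write down the channels a choice loads implicitly, which is exactly the accounting the untheorized version skips.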
This complements "Why are presuppositions more persuasive than direct assertions?": presuppositional persuasion is one specific mechanism within the broader logos/pathos space — content that is structurally hard to challenge because it is encoded as background. Aristotle's taxonomy gives that mechanism a coordinate in a wider design space, and it gives the "Does personalization in AI increase trust or manipulation risk?" thread a vocabulary for AI persuasion mechanics that is not specific to recommendations or advice — it applies to any AI artifact that wants the user to act on its output. Across personalization research, the missing variable has often been which appeal channel a system is loading; the three-appeals frame fills that vocabulary gap.
Source: Human-Centered Design paper: "Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design"
Related concepts in this collection
- Why are presuppositions more persuasive than direct assertions?
  Explores why presenting information as shared background rather than as a claim makes it more persuasive to audiences. This matters because it reveals how language structure itself can bypass critical evaluation. (Relation: complementary persuasion mechanism; a specific instance within the broader logos/pathos space.)
- What if XAI is fundamentally a communication problem?
  Does explanation effectiveness depend on who delivers it, how it's framed, and who uses it? This challenges the dominant technical view that treats explanations as context-independent outputs. (Relation: sibling; the source-framing-recipient triad is where ethos lives operationally.)
- Are AI explanations really descriptions or adoption arguments?
  Most XAI work treats explanations as neutral descriptions of model behavior, but they may actually be doing persuasive work to justify AI adoption. What happens when we acknowledge this rhetorical function? (Relation: sibling; the two-goal axis of the 3×2 space.)
Original note title: logos ethos pathos give XAI a persuasion taxonomy — explanations operate on logical alignment, source credibility, and emotional framing simultaneously