Rhetorical XAI: Explaining AI’s Benefits as well as its Use via Rhetorical Design
Modern AI systems are notoriously opaque, limiting efforts to understand or audit their behaviors [42, 188]. In response, Explainable Artificial Intelligence (XAI) aims to foster trust and accountability in AI systems by making their decision-making processes more transparent [36, 121]. XAI encompasses techniques to support human understanding of both local predictions (individual decisions) and global behaviors [39, 43, 138, 162].
However, XAI is not solely a technical challenge of producing faithful rationales of model behavior. XAI is also a communication problem because explanations are situated messages whose interpretation is mediated by who presents them, how they are framed, and who must act on them. In practice, different stakeholders leverage explanations to pursue distinct goals [113, 132, 162, 181, 217]. Developers primarily engage with explanations authored for technical audiences to debug model errors and biases [114, 163]. Ethicists and policymakers rely on explanations framed to justify system behavior when assessing AI safety and accountability [36]. End-users draw on explanations to decide how to appropriately leverage AI to improve task performance and efficiency [138, 221, 244]. Together, these cases illustrate that explanation effectiveness is not intrinsic, but depends on the interplay between the explanation’s source, its framing, and the recipient’s role and task context.
In this article, we expand the conceptual scope of XAI beyond explaining how AI works towards also articulating why AI merits use. An AI system is just one of many possible solutions to a user problem, so AI adoption warrants justification. Through this lens, AI explanations can serve a rhetorical function by communicating why an AI system is beneficial, credible, and appropriate in a given context. From this perspective, explanations are not technical specifications but designed artifacts. They produce diverse rhetorical effects (e.g., experiential, affective, and even irrational forms of persuasion) that are not fully captured by traditional XAI design goals focused primarily on explaining how AI works.
To expand the dominant technical understanding of XAI and situate it within a social perspective, we propose Rhetorical XAI, an analytical framework rooted in Rhetorical Design that extends XAI beyond explaining how AI works to also account for why AI systems merit use. Rhetorical XAI characterizes explanation design through three rhetorical appeals: (1) logos, which aligns technical logic with human reasoning through visual and textual abstractions; (2) ethos, which establishes contextual credibility based on the explanation source and its appropriateness to the decision task; and (3) pathos, which engages users emotionally by framing explanations around their motivations, expectations, or situated needs during interaction. Using this framework, we synthesize prior XAI design strategies along these three rhetorical dimensions and across the two explanatory goals. In doing so, we situate Rhetorical XAI within critical discourse on persuasive technologies, foregrounding questions of (in)appropriate explanation use in social and collaborative contexts.
The remainder of this article is structured as follows. Section 2 situates XAI as a communicative problem by reviewing relevant communication theories in HCI and CSCW, motivating the inclusion of rhetorical perspectives for explanation design. Section 3 outlines our narrative review methodology, detailing the procedures for literature selection, coding, and synthesis to support analytical transparency. Section 4 reviews existing XAI design paradigms and reveals a prevailing focus on explanations as informative tools for understanding how AI works, rather than as rhetorically designed artifacts that justify why AI merits use. To address this limitation, Section 5 introduces the Rhetorical XAI framework (Table 1), which characterizes explanation design along three rhetorical dimensions: logical reasoning (logos), credibility (ethos), and emotional resonance (pathos). Section 6 presents our synthesis of prior XAI design strategies across different rhetorical appeals and explanatory goals (Table 2). Finally, Section 7 critically examines the benefits and risks of identified rhetorical strategies, outlining implications and open challenges for responsible AI adoption in social and collaborative contexts.
2.1 XAI as a Communication Problem
A foundational formulation of XAI as a communicative problem was offered by Miller [157], who situated XAI in the cognitive and social processes through which humans naturally constructed explanations. Drawing on Peirce’s theory of abduction [179], Miller argued that contrastive explanations (“why P rather than Q?”) best support human causal inference because they reflected abductive reasoning, which favored the most plausible explanation among alternatives. Building on the linguistic tradition of pragmatics, Miller further incorporated Grice’s maxims [103] and conversational models [7, 111, 225] to frame AI explanations as cooperative dialogues for shared understanding. This perspective has substantially influenced subsequent XAI research, including technical methods for generating contrastive explanations [122, 125], empirical evaluations of their efficacy [37, 220], and designs of conversational explanatory systems [97, 139].
Although Miller framed these theoretical relations as the “social sciences” of XAI, his account primarily conceptualized explanation as a form of logical discourse that emphasized causal coherence and linguistic pragmatics. In contrast, scholars in human–computer interaction (HCI) and computer-supported cooperative work (CSCW) have adopted broader communication perspectives that situated AI explanations within social, organizational, and collaborative contexts. For example, drawing on Luhmann [146]’s multi-layer cybernetics, Keenan and Sokol [127] argued that the meaning of AI explanations emerged from complex N-order interpretations within social groups, rather than from simple dyadic human–AI dialogue. Ehsan et al. [73] built on Erickson and Kellogg [82]’s social translucence to reveal organizational implications of AI explanations grounded in sociocultural communication practices [110, 175].
Communication scholars complemented these perspectives by highlighting an imbalance in how XAI research attended to different elements of the communication process. For example, Xu and Shi [239] observed that most HCI work on XAI adopted a predominantly receiver-focused perspective, emphasizing how user needs, human factors, and organizational contexts affected explanation effectiveness. In contrast, they noted that explanation sources were commonly discussed only at the dataset level, leaving the roles of system developers, designers, and deploying organizations largely unaddressed.
In summary, viewing XAI as a communicative problem positions explanations as designed artifacts whose interpretations arise from the interplay between explanation sources, explanatory formats, and target recipients within social contexts [73, 157, 239].
2.2 The Communication Roots of HCI, CSCW, and Their Rhetorical Standpoints
Building on the view of XAI as a communicative problem, this section situates rhetoric within the broader communication traditions that have shaped HCI and CSCW. While prior work has drawn on semiotic, pragmatic, and sociocultural theories to examine how meaning is interactively encoded and negotiated, rhetoric foregrounds the deliberate design of communicative forms to persuade, justify, and establish credibility. We draw on this rhetorical lineage in HCI to position AI explanations as designed artifacts whose form, framing, and presentation influence not only how systems are understood, but also how they are evaluated and adopted in practice.
2.2.1 The Semiotic and Pragmatic Focus in HCI. Barbosa and Pereira [12] traced the historical role of language and communication in HCI primarily through the lenses of semiotics and pragmatics. Semiotics attends to how meaning is encoded in static representational elements such as buttons, text, and images, while pragmatics extends this focus to how meaning is constructed through users’ actions and interactions. For example, Grice’s maxims [103] offer a pragmatic framework based on quantity, quality, relation, and manner for designing and evaluating conversational behavior. Speech Act theory [194, 236] builds on this pragmatic view by treating language as performative, enabling utterances to function as actions that trigger system behavior rather than merely convey descriptive information. Dubberly et al. [69] drew on cybernetics to conceptualize interaction as an ongoing conversational system, distinguishing between reactive, self-regulating, and learning behaviors that emerge through feedback over time.
2.2.2 The Sociocultural and Critical Focus in CSCW. CSCW researchers focus on the technological challenges that arise in collaborative work environments, particularly those involving interdependent activities, temporal coordination, and organizational structures [2, 203]. In these settings, pragmatic theories might not fully account for the situated and evolving nature of real-world collaborative work. For example, Bowers and Churcher [23] showed that while classic Speech Act theory treated interaction as a sequence of discrete, well-defined utterances, real-world coordination was organized around shared activity contexts and situational cues that rendered turn-taking fluid and often implicit. Similarly, Suchman [208] argued that categorizing interaction into predefined speech utterances (e.g., requests, commitments, promises) imposed rigid control structures that failed to accommodate the contingent and adaptive nature of work.
In response to these socio-technical gaps [2], CSCW scholars have increasingly drawn on sociocultural and critical communication theories to better account for how meaning is encoded and negotiated in practice. Examples include Orlikowski and Gash [175]’s concept of technological frames, which built on organizational sensemaking theory [235] to examine how different social groups interpreted technologies, as well as Biocca et al. [19]’s application of social presence theory to understand user satisfaction in online media contexts.
2.2.3 Rhetorical Theories from Language to HCI. Building on the above communication paradigms, we turn to rhetoric as another HCI tradition that shifts attention from interpretation and coordination to persuasion and justification.
Rhetoric foregrounds persuasion. As Craig [60] explained, among seven communication traditions, Semiotics examined “intersubjective mediation by signs” and Phenomenology focused on “dialogue experience[s]” alongside four additional traditions, including Cybernetic, Sociopsychological, Sociocultural, and Critical. In contrast, Rhetoric centered on “how the artful use of discourse [serves] to persuade audiences.” Aristotle first defined rhetoric as “the faculty of observing in any given case the available means of persuasion” [8]. He stated that an argument persuades not only through logical reasoning (logos) but also through the speaker’s credibility (ethos) and the emotional influence exerted on the audience (pathos). For example, a genuine diamond ring (strong logos) might be perceived as fake if presented by someone in severe financial hardship (weak ethos). Similarly, straightforward truths (strong logos) might fail to convince a child who is crying and having a tantrum (weak pathos).
Traditionally focused on discourse, rhetoric has been adopted in related domains such as advertising [35], narrative fiction [21], social media [46], and more recently in HCI. Here, Boyarski and Buchanan [24] framed interaction design as persuasive discourse, where design choices shape system understanding and use through logical structure, implied voice, and affective elements. Similarly, Buchanan [29] viewed electronic products as rhetorical artifacts designed to attract users through technological reasoning (logos), manufacturer credibility (ethos), and emotional resonance (pathos). Beyond artifacts, Brummett [27] conceptualized machines as rhetorical social agents that participated in meaning-making through functional utility (logos), aesthetic and material qualities (ethos), and cultural resonance (pathos). Notably, rhetoric challenges assumptions of idealized rationality that often underpin pragmatic theories (Section 2.2.1). As Barbosa and Pereira [12] noted, Grice’s maxims [103] assumed “people engaged in communicative interaction will do their best to get their message across, and in doing so will abide by a number of conversational conventions, or maxims.” However, in practice, communication often departs from these ideals. While rhetoric incorporates rational argumentation as one persuasive strategy (e.g., logos), it also highlights additional appeals such as ethos and pathos, which account for affective, situational, and non-rational influences inherent in social interactions. Accordingly, rhetoric offers a valuable lens for scrutinizing HCI designs in practical settings where user behavior cannot be assumed to follow idealized models of rationality.
2.2.4 Rhetorical Design. HCI practitioners have developed concrete design strategies to enact different rhetorical appeals. In web interface design, for example, emotional engagement (pathos) is facilitated by visually prominent elements such as strong contrast, surprising details, and aesthetic signals. Credibility (ethos) is established through reassurances and forms of social proof, while logical appeal (logos) is conveyed through clear and verifiable information [119, 201, 202]. Similarly, information visualization employs linguistic devices (e.g., irony, analogy) alongside visual techniques (e.g., metaphor, contrast, categorization) to influence perceptions of trustworthiness, emotional resonance, and logical coherence [118]. Related work on infographics also leverages rhetorical strategies to communicate complex social and medical topics, including recycling and sustainability [117], public health statistics [154], and civic participation [56, 71]. Conversely, HCI researchers have examined how rhetorical techniques can be applied in harmful ways, resulting in coercion or “dark patterns” that actively deceive users for the benefit of other parties. For instance, Gray et al. [101] illustrated how interface designs can deliberately exploit users’ cognitive and emotional vulnerabilities to steer them toward unintended decisions, a concern that has been extended to XAI by Chromik et al. [54].
2.2.5 Advocating for Rhetorical XAI. Building on the view of XAI as a communicative problem, this article adopts a rhetorical perspective that conceptualizes AI explanations as purposively designed artifacts rather than neutral descriptors of model behavior. This rhetorical lens thus extends existing HCI rhetoric traditions into XAI by shifting attention from explaining how AI systems work to also examining how explanations persuade users that AI systems merit use. By doing so, this rhetorical perspective makes explicit the experiential, affective, and non-ideal influences on users that are often overlooked by need-driven XAI design goals centered on technical understanding.
Many existing XAI studies already report, either explicitly or implicitly, appeals that can be understood through a rhetorical lens, including work on persuasive AI advice adoption [68], experiential explanation design [28, 81], tangible and sensory explanation experiences [5, 98], as well as dark patterns associated with explanation interfaces [54]. However, these contributions remain fragmented across domains and lack a unifying account that theoretically characterizes design strategies along different rhetorical dimensions and explanatory goals. To address this gap, this article introduces a rhetorical framework (Section 5) that synthesizes existing explanation design strategies from prior XAI research (Section 6).
We have proposed conceptualizing AI explanations as rhetorical artifacts, designed not only to support user understanding of AI systems but also to promote appropriate AI adoption as another dimension of responsible AI. Drawing on the concept of rhetorical design, we developed a rhetorical framework that guided our coding process for a narrative literature review. This review synthesized a range of design strategies used in prior XAI studies to make explanations logically convincing, credibly appealing, and emotionally engaging. By critically examining the strengths and limitations of these strategies, our work extends the design space of XAI by providing designers with new tools to address the complex needs of diverse stakeholders and the broader sociotechnical challenges involved in building XAI systems.