Does rational cooperation actually describe how AI communication works?
Gricean models assume good-faith rational agents coordinating meaning. But do AI systems designed to persuade—using credibility, emotion, and non-rational appeals—really operate under these assumptions? What happens when we drop the rationality premise?
The Rhetorical XAI paper makes a theoretical move that matters beyond XAI. It notes that Grice's maxims assume "people engaged in communicative interaction will do their best to get their message across, and in doing so will abide by a number of conversational conventions." In practice, communication often departs from these ideals. Rhetoric foregrounds what pragmatics idealizes away — credibility (ethos), affect (pathos), and non-rational influence — and treats them as constitutive of how communication actually works rather than as failure modes to be corrected. Pragmatic models of HCI communication, built on cooperative assumptions, cannot capture systems whose interfaces are designed to persuade.
This is a foundational point for any communication-centric account of AI. Pragmatic models treat language as a coordination instrument among rational agents trying to share understanding. Rhetoric treats language as a strategic instrument among situated agents trying to bring about adoption, change, action — and grants that affect, credibility, and non-rational appeals are first-class mechanisms, not noise. The two pictures are not on a continuum; they make different claims about what communication is. Treating AI systems through Gricean lenses presumes a cooperative interlocutor where there is, at minimum, a designed artifact with adoption-shaped incentives.
This is a theoretical sibling to the claim that the quasi- prefix fails for communicative states: because communication is constitutively intersubjective, you cannot weaken communication, you can only eliminate it. Both insights argue that imported philosophical frames (cooperative pragmatics, qualified mental-state language) miscarry when applied to AI communication because the underlying constitutive assumptions don't hold. And it is in productive tension with "Does chain-of-thought reasoning reflect genuine thinking or performance?": the performative-CoT result shows that even within an apparently logical artifact (chain-of-thought), the rhetorical/performative dimension dominates on easy cases. Logos and pathos do not separate cleanly; performance bleeds into reasoning even at the token level.
For the Conversation Glossary project, this is foundational vocabulary for the tension between Habermas's ideal speech situation and Goffman/Bakhtin's situated communication. Language as event is rhetorical, not propositional, and AI systems live in event-time.
Source: Human Centered Design Paper: "Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design"
Related concepts in this collection
- "Does chain-of-thought reasoning reflect genuine thinking or performance?" When language models generate step-by-step reasoning, are they actually thinking through problems or just producing text that looks like reasoning? This matters for understanding whether extended reasoning tokens add real computational value. (Productive tension: the rhetorical/performative dimension dominates even within ostensibly logical artifacts.)
- "What if XAI is fundamentally a communication problem?" Does explanation effectiveness depend on who delivers it, how it's framed, and who uses it? This challenges the dominant technical view that treats explanations as context-independent outputs. (Sibling: the rhetorical situation is what idealized-rationality models cannot represent.)
Original note title
rhetoric breaks the idealized-rationality assumption baked into Gricean and pragmatic models of HCI communication