Language Understanding and Pragmatics

Can formal argumentation make AI decisions truly contestable?

Explores whether structuring AI decisions as formal argument graphs (with explicit attacks and defenses) enables users to meaningfully challenge and navigate reasoning in ways unstructured LLM outputs cannot.

Note · 2026-02-21 · sourced from Argumentation
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

A standard LLM produces conclusions, sometimes with reasoning attached and sometimes not, but in a form that is not structured for contestation. A user who disagrees with a conclusion can ask for clarification, but they cannot navigate the reasoning structure to identify exactly which premise they reject or which argument they believe is defeated by a counterargument.

Argumentative LLMs apply formal argumentation theory — specifically Dung's abstract argumentation framework — to structure AI outputs. In Dung's framework, arguments attack each other; an argument is "accepted" if every attack on it is itself defeated. The framework is a directed graph of attack relations, and the accepted arguments are those that survive all attacks under a chosen semantics (grounded, preferred, stable).
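
A minimal sketch of the grounded semantics, assuming arguments are plain strings and attacks are (attacker, target) pairs; this illustrates Dung's definition, not any particular argumentative-LLM system. The grounded extension is the least fixed point of the characteristic function: start from the unattacked arguments and repeatedly add everything they defend.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of a Dung framework.

    arguments: set of strings; attacks: set of (attacker, target)
    pairs over those arguments.
    """
    attackers = {a: set() for a in arguments}
    for attacker, target in attacks:
        attackers[target].add(attacker)

    accepted = set()
    while True:
        # An argument is acceptable w.r.t. `accepted` if every one of
        # its attackers is itself attacked by an accepted argument.
        # (Arguments with no attackers are vacuously acceptable.)
        new = {a for a in arguments
               if all(attackers[b] & accepted for b in attackers[a])}
        if new == accepted:
            return accepted
        accepted = new
```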

When an LLM's decision process is structured as a Dung argumentation framework, the output is not just a conclusion but a graph: these arguments support this conclusion, these counterarguments attack those supporting arguments, these rebuttals defend the original arguments. A user can inspect the graph, identify the specific argument they contest, and challenge it — the framework tells them exactly what would change the conclusion.
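
Continuing the sketch, a toy decision shows what contestation looks like concretely. Every argument name here is invented for illustration:

```python
# Toy decision, reusing grounded_extension from the sketch above.
arguments = {"approve", "low_income", "stable_employment"}
attacks = {("low_income", "approve"), ("stable_employment", "low_income")}

print(grounded_extension(arguments, attacks))
# {'approve', 'stable_employment'}: low_income is defeated, approve survives.

# A user who contests stable_employment adds a counterargument and
# recomputes; the graph shows exactly how the conclusion flips.
arguments.add("contract_ended")
attacks.add(("contract_ended", "stable_employment"))
print(grounded_extension(arguments, attacks))
# {'contract_ended', 'low_income'}: approve is now defeated.
```

The recomputation is the contestation mechanism: the user does not argue with a paragraph, they edit one edge of the graph and watch the acceptance status change.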

This makes AI output genuinely contestable in a way that standard LLM reasoning is not. "Can we measure how deeply models represent political ideology?" and "Does high refusal rate indicate ethical caution or shallow understanding?" both point at the absence of navigable structure in LLM political and ideological reasoning. Formal argumentation provides that structure.

The connection to "Do language models actually use their encoded knowledge?" is direct: standard LLM outputs may not reflect the reasoning that actually produced them. Forcing the reasoning into a formal argumentation structure requires the model to generate the argument graph that justifies the conclusion, which makes it harder to produce outputs whose reasoning cannot be reconstructed.
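
A hedged sketch of what that forcing could look like: the model is asked to emit its argument graph, and the conclusion is accepted only if it survives in the grounded extension of that graph. `query_llm` and `SCHEMA_PROMPT` are hypothetical placeholders, not a real API:

```python
import json

# Hypothetical instruction appended to the user's question.
SCHEMA_PROMPT = (
    "Return JSON with keys: arguments (list of strings), "
    "attacks (list of [attacker, target] pairs), conclusion (one argument)."
)

def contestable_answer(question, query_llm):
    # query_llm: any callable taking a prompt string and returning text.
    graph = json.loads(query_llm(question + "\n" + SCHEMA_PROMPT))
    accepted = grounded_extension(
        set(graph["arguments"]),
        {tuple(pair) for pair in graph["attacks"]},
    )
    if graph["conclusion"] not in accepted:
        raise ValueError("conclusion is not defended by its own argument graph")
    # The graph itself is the contestable artifact, returned for inspection.
    return graph["conclusion"], graph
```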

The limitation: formal argumentation requires the argument space to be enumerable and structured, which works for some domains (medical diagnosis, policy analysis) but not for open-ended creative or subjective tasks.

Extension to opinion domains via Key Point Hierarchies (KPH): KPH applies entailment-graph structure to opinion summarization. Key points extracted from reviews are organized by specificity into a hierarchy, so users quickly grasp high-level themes ("the hotel is beautiful", "great service") and then drill down to fine-grained insights ("check-in was quick and easy"). Same-meaning key points cluster into single nodes, reducing redundancy. This extends formal argumentation's navigability principle from logical argument structure to opinion structure: flat lists of key points are hard to consume, but hierarchical entailment structure makes them tractable for navigation and sense-making.
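
A minimal sketch of the hierarchy-building idea, assuming a pairwise `entails(a, b)` scorer (for example, an NLI model) estimating whether key point a is a specific case of key point b. The actual KPH method also clusters same-meaning key points into single nodes first, which this sketch omits:

```python
def build_hierarchy(key_points, entails, threshold=0.5):
    """Attach each key point to the point it most strongly entails.

    Returns a child -> parent map; None marks a root (high-level theme).
    More specific points entail more general ones, so parents are the
    general themes and children are the fine-grained insights.
    """
    parents = {}
    for child in key_points:
        scored = [(entails(child, other), other)
                  for other in key_points if other != child]
        best_score, best_parent = max(scored, default=(0.0, None))
        parents[child] = best_parent if best_score > threshold else None
    return parents
```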

The Social Transparency (ST) perspective extends this further: even when algorithm-level explainability is achieved, it may be insufficient. Most consequential AI systems are embedded in socio-organizational tapestries where groups of humans interact with the system. "If the boundary is traced along the bounds of an algorithm, we risk excluding the human and social factors that significantly impact the way people make sense of a system." Two identified pitfalls — Solutionism (always seeking technical solutions) and Formalism (seeking abstract mathematical solutions) — are deeply embedded in AI research and widen the gap between algorithmic and social explanation. Formal argumentation addresses the algorithm boundary; Social Transparency addresses the socio-organizational boundary beyond it.

Engineering design as argumentative discourse: The product development process is inherently argumentative — solving engineering problems requires discourse where experiments, calculations, and simulations inform reasoning but do not replace it. Representing this argumentative discourse as a digital artifact would: (1) improve documentation (archiving reasoning, not just CAD files), (2) make past design decisions traceable, (3) improve collaborative design, (4) enable machine participation in the reasoning process. LLMs embedded in a predefined causal framework ("querying a language model becomes a computational primitive") produce interpretable reasoning traces with reduced hallucination — formal argumentation structure provides guardrails that natural language reasoning lacks.
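
A sketch of the "computational primitive" idea under those assumptions: the discourse structure (claim, objection, defense) is fixed code, and only the individual steps are delegated to the model. `query_llm` is again a placeholder, and the returned dict stands in for the archivable reasoning artifact:

```python
def design_rationale(question, query_llm):
    # The discourse structure is predefined code; the LLM only fills
    # individual slots, so the reasoning trace is always recoverable.
    claim = query_llm(f"Propose a design decision for: {question}")
    objection = query_llm(f"State the strongest objection to: {claim}")
    defense = query_llm(f"Defend '{claim}' against '{objection}', or concede.")
    # Archiving this dict preserves the reasoning, not just the outcome.
    return {"claim": claim, "attack": objection, "defense": defense}
```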


Source: Argumentation, Design Frameworks

Original note title

structured formal argumentation frameworks make ai decisions explainable and contestable in ways that unstructured llm outputs cannot