Language Understanding and Pragmatics · LLM Reasoning and Architecture

Can critical questions improve how language models reason?

Does structuring prompts around argumentation theory's warrant-checking questions force language models to perform deeper reasoning rather than surface pattern matching? This matters because models might produce correct answers without actually reasoning correctly.

Note · 2026-02-21 · sourced from Argumentation
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

CQoT (Critical-Questions-of-Thought) adapts Toulmin's argument model into a prompting framework. Standard chain-of-thought prompting asks the model to reason step by step. CQoT additionally requires the model to answer specific critical questions about its own reasoning: What is the warrant connecting evidence to claim? What backing supports the warrant? What potential rebuttals exist? Does the claim need qualification?

These questions are not open-ended reflection requests. They are the specific interrogation targets from argumentation theory — the structural requirements that valid arguments must satisfy. By instantiating them as required prompting steps, CQoT converts implicit argumentative requirements into explicit reasoning constraints.
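A minimal sketch of what such a prompt wrapper might look like. The question wording below paraphrases the Toulmin components named above; the paper's exact phrasing, ordering, and pipeline (the function name `build_cqot_prompt` and the output format are illustrative assumptions) may differ.

```python
# Sketch of a CQoT-style prompt wrapper. The critical questions paraphrase
# Toulmin's components (warrant, backing, rebuttal, qualifier); the exact
# wording used in the CQoT paper may differ.

CRITICAL_QUESTIONS = [
    "What is the warrant connecting the evidence to the claim?",
    "What backing supports that warrant?",
    "What potential rebuttals exist?",
    "Does the claim need qualification?",
]

def build_cqot_prompt(task: str) -> str:
    """Wrap a task so the model must answer Toulmin-style critical
    questions about its own reasoning before committing to an answer."""
    questions = "\n".join(
        f"{i}. {q}" for i, q in enumerate(CRITICAL_QUESTIONS, 1)
    )
    return (
        f"Task: {task}\n\n"
        "First, reason step by step toward an answer.\n"
        "Then, before committing to it, answer each critical question "
        "about your own reasoning:\n"
        f"{questions}\n\n"
        "Finally, state your answer, revised if any question exposed a gap."
    )

# Example: wrap an argument-evaluation task.
print(build_cqot_prompt("Is the argument in the passage valid?"))
```

The point of the wrapper is that the critical questions become required output fields, not optional reflection, which is what distinguishes this from generic "think carefully" instructions.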

The improvement over standard CoT is consistent. Forcing warrant-checking catches the specific failure documented in "Can LLMs identify the hidden assumptions that make arguments work?": models that correctly identify claim-data structure still fail at the implicit premise. CQoT makes the implicit premise an explicit required output.

The mechanism generalizes beyond argumentation tasks. "Can models pass tests while missing the actual grammar?" describes the broader problem: correct outputs do not prove structural learning. CQoT forces the structural reasoning into the surface output where it can be evaluated and — critically — where the model must perform it rather than skip it.
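Because the structural reasoning is surfaced in the output, it can be checked mechanically. A hypothetical check (the section names and the plain substring matching are illustrative assumptions, not part of CQoT itself) might look like:

```python
# Hypothetical check that a model response makes each Toulmin component
# explicit in its surface text, so an evaluator can inspect it directly.
# Section names and substring matching are illustrative simplifications.

REQUIRED_SECTIONS = ("warrant", "backing", "rebuttal", "qualifier")

def surfaces_structure(response: str) -> dict[str, bool]:
    """Return, per Toulmin component, whether the response names it."""
    lower = response.lower()
    return {section: section in lower for section in REQUIRED_SECTIONS}

# A response that skips the structured steps fails the check.
bare = surfaces_structure("The answer is B.")
full = surfaces_structure(
    "Warrant: rising demand raises prices. Backing: standard market "
    "theory. Rebuttal: supply shocks could dominate. Qualifier: likely."
)
```

A real evaluator would need more than substring matching, but the contrast illustrates the point: with CQoT the components must appear to count as a complete answer, so skipping them is detectable.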

This is an instance of the broader principle that structured decomposition of implicit reasoning requirements improves LLM performance on tasks where those requirements would otherwise be skipped. The cognitive science parallel: experts who have internalized decision criteria can execute them fluently; forcing novices to answer structured questions makes explicit what experts do implicitly. CQoT structures the novice reasoning process.

The limitation: CQoT assumes the model can correctly identify what the warrant should be, once it is asked to. For domains where the warranting relationship is itself contested, the structured prompt provides the form of warrant-checking without guaranteeing the content.



Original note title: applying argumentation scheme critical questions as structured prompts improves llm reasoning by forcing warrant checking