Attentive Reasoning Queries: A Systematic Method for Optimizing Instruction-Following in Large Language Models
We present Attentive Reasoning Queries (ARQs), a novel structured reasoning approach that significantly improves instruction-following in Large Language Models through domain-specialized reasoning blueprints. While LLMs demonstrate remarkable capabilities across diverse tasks, they often fail to maintain adherence to complex, use-case-specific instructions during multi-turn conversations, presenting challenges for business-critical applications. ARQs address this limitation by guiding LLMs through systematic reasoning steps with targeted queries that reinstate critical instructions and facilitate intermediate reasoning throughout the completion process. In extensive testing within Parlant, our framework for reliable customer-facing agents, where ARQs originated out of necessity, they achieved a 90.2% success rate across 87 test scenarios, outperforming both Chain-of-Thought reasoning (86.1%) and direct response generation (81.5%).
In this paper, we introduce Attentive Reasoning Queries (ARQs), a structured approach to guiding LLMs through systematic reasoning steps using targeted, task-specific queries. ARQs leverage domain knowledge to redirect the model's attention to critical instructions, decisions, and potential pitfalls at the points where such attention is most crucial. This approach serves two key functions: (1) reinstating important instructions, and (2) facilitating intermediate reasoning steps. These functions are particularly instrumental in complex and nuanced conversational contexts in which adherence to specific instructions is essential.
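To make the idea concrete, the following is a minimal sketch of how a domain-specialized reasoning blueprint of targeted queries might be assembled into a completion prompt. The field names, query wording, and helper function are illustrative assumptions for exposition, not the exact schema used in Parlant.

```python
# Illustrative sketch only: the blueprint keys and queries below are
# hypothetical examples, not the paper's actual ARQ schema.

# An ARQ blueprint: an ordered set of targeted queries the model must
# answer before producing its final response. Early queries reinstate
# critical instructions; later ones force intermediate reasoning.
ARQ_BLUEPRINT = {
    "relevant_instructions": (
        "Which of the agent's instructions apply to the current turn?"
    ),
    "instruction_restatement": (
        "Restate each applicable instruction before deciding how to respond."
    ),
    "pitfall_check": (
        "Does the planned response risk a known failure mode for this domain?"
    ),
    "final_response": "Given the answers above, draft the reply to the user.",
}


def build_arq_prompt(conversation: str, blueprint: dict) -> str:
    """Assemble a prompt that asks the model to answer each targeted
    query in order, so critical instructions are reinstated and
    intermediate reasoning happens inside the completion itself."""
    lines = [
        f"Conversation so far:\n{conversation}\n",
        "Answer the following queries, in order, before replying:",
    ]
    for i, (key, query) in enumerate(blueprint.items(), start=1):
        lines.append(f'{i}. "{key}": {query}')
    return "\n".join(lines)


prompt = build_arq_prompt("User: Can I get a refund?", ARQ_BLUEPRINT)
print(prompt.splitlines()[0])  # prints "Conversation so far:"
```

Because each query names a specific decision point, the completion is steered through the blueprint step by step rather than free-associating, which is what distinguishes this structure from generic Chain-of-Thought prompting.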