Thinking Assistants: LLM-Based Conversational Assistants that Help Users Think By Asking rather than Answering

Paper · arXiv 2312.06024 · Published December 10, 2023
Decision Support · Question Answer Search · Personas · Personality · Conversation Agents

Complex tasks like research and strategic thinking often benefit from augmenting the thinking process itself rather than passively retrieving information. We introduce the concept of “Thinking Assistants”, a new genre of assistants that help users improve decision-making by asking reflective questions grounded in expert knowledge. In our lab study (N=80), these Large Language Model (LLM) based Thinking Assistants guided users through important decisions more effectively than conversational agents that only asked questions, only provided advice, or did neither.

Our work proposes directions for developing more effective LLM agents. Rather than adhering to the prevailing authoritative approach of generating definitive answers, LLM agents aimed at cognitive enhancement should prioritize fostering reflection: first respond with inquiry designed to prompt thoughtful consideration, and only offer advice after gaining a deeper understanding of the user’s context and needs.
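The ask-first-then-advise behavior described above can be sketched as a simple turn policy around an LLM. The following Python is a hypothetical illustration, not the authors' implementation: the mode switch, the turn threshold, and the prompt wording are all assumptions made for the example.

```python
# Hypothetical sketch of a "Thinking Assistant" turn policy: stay in
# question-asking mode until enough user context has accumulated, then
# switch to advice mode. Thresholds and prompts are illustrative only.

ASK_MODE = "ask"
ADVISE_MODE = "advise"

def choose_mode(dialogue_history, min_reflection_turns=3):
    """Return 'ask' while the assistant is still building context, else 'advise'."""
    user_turns = [t for t in dialogue_history if t["role"] == "user"]
    return ASK_MODE if len(user_turns) < min_reflection_turns else ADVISE_MODE

def system_prompt(mode, domain="an important decision"):
    """Build an LLM system prompt for the current mode (wording is assumed)."""
    if mode == ASK_MODE:
        return (f"You are a mentor helping a user think about {domain}. "
                "Do not give advice yet. Ask one reflective question that an "
                "expert in this domain would ask, grounded in the conversation.")
    return (f"You are a mentor helping a user with {domain}. "
            "Using what the user has shared so far, offer concise, "
            "tailored advice.")

# Early in the dialogue the policy keeps the assistant asking questions.
history = [{"role": "user", "content": "Should I apply to a PhD program?"}]
print(choose_mode(history))  # → ask
```

In a real system the returned prompt would be sent to an LLM along with the dialogue history; the sketch only shows the control logic that separates the reflective-inquiry phase from the advice phase.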

The key insight is that recent large language models can engage in dialogue that encourages reflection and also encode nuanced information about the world (so-called “world knowledge”), yet can still be augmented with domain-specific expertise.

HCI communities have investigated various systems to support self-reflection through different interventions. These approaches include provoking thoughts about oneself by introducing different personas and conversing with persona-driven chatbots [30], encouraging users to think from different perspectives [69], collecting personal information to gain insights [67], and sharing this information with others [59]. LLM-powered chatbots have emerged as a promising avenue for self-discovery, equipped with capabilities to provide interactive support, such as conversing in a human-like way (sharing experiences or opinions on topics) and remembering essential user information [23, 30]. While authentic interactions with chatbots allow users to feel more connected, they also increase the risk of over-reliance and frustration due to the inherent gap between human communication and chatbot capabilities [23, 35], as chatbots cannot fully grasp the nuances of human interaction [34]. Instead of focusing on imitating the tone of experts, our approach leverages the power of LLMs to perform reflective inquiry, aiding self-discovery and promoting reflective thinking through the questions that experts might ask.