AInsight: Augmenting Expert Decision-Making with On-the-Fly Insights Grounded in Historical Data
In decision-making conversations, experts must navigate complex choices and make on-the-spot decisions while remaining engaged in the dialogue. Although extensive historical data often exists, the real-time nature of these scenarios makes it infeasible for decision-makers to review and leverage relevant information. This raises an interesting question: What if experts could draw on relevant past data during real-time decision-making, through insights derived from that data? To explore this, we implemented a conversational user interface, taking doctor-patient interactions as an example use case. Our system continuously listens to the conversation, identifies patient problems and doctor-suggested solutions, retrieves related records from an embedded dataset, and generates concise insights using a pipeline built around a retrieval-based Large Language Model (LLM) agent. We evaluated the prototype by embedding Health Canada datasets into a vector database and conducting simulated studies with sample doctor-patient dialogues; the results show the system's effectiveness but also surface challenges that set directions for the next steps of our work.
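The retrieval step of the pipeline can be illustrated with a minimal sketch. This is not the deployed implementation: the toy bag-of-words `embed` function stands in for a learned embedding model, the in-memory `corpus` stands in for the vector database of indexed records, and the example query and record texts are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use a learned model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return num / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, corpus, k=2):
    """Rank indexed records by similarity to the extracted problem/solution."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

# Hypothetical indexed records standing in for the embedded dataset.
corpus = [
    "metformin adverse events reported in adults with type 2 diabetes",
    "seasonal influenza vaccination coverage by province",
    "insulin dosing guidance for type 2 diabetes patients",
]

# Problem/solution fragment identified from the ongoing conversation.
query = "patient with type 2 diabetes, doctor suggests metformin"
top = retrieve(query, corpus, k=2)
```

In the full pipeline, the retrieved records would then be passed to the LLM agent as grounding context for insight generation.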
5.1 Contributions and Design Implications
In this work, we designed a system that assists experts by providing on-the-fly insights, enabling more informed decisions within limited timeframes. This work contributes to the growing body of research on AI-assisted decision-making through three key contributions. First, our design prioritizes human agency by positioning the system as an augmenting tool: it keeps the expert in control throughout the process and leaves final decisions to them.
This emphasis on human control is especially important in domains where decisions carry long-lasting consequences, ensuring experts retain authority while leveraging AI support. Second, the system emphasizes transparency by grounding generated insights in a knowledge base provided by the expert, fostering trust in the insights and reducing concerns about the unclear origins of LLM-generated responses and occasional hallucinations. Finally, recognizing the real-time constraints of conversational decision-making, our system presents supporting information and insights succinctly through a conversational user interface designed for minimal interaction, requiring only navigation between insights.
5.2 Challenges and Future Work
During the evaluation of our system, we encountered several challenges, the first set concerning the quality of the generated insights and the knowledge base behind them. We found that the system's effectiveness depends largely on the relevance of the indexed data: if the collected knowledge base contains noisy or irrelevant records, the system may generate misleading insights that negatively affect the decision-making process. It is therefore important to curate high-quality, domain-relevant knowledge bases while ensuring the insight generation module is robust enough to handle potential inconsistencies or noise within the source data. We also encountered challenges regarding the presentation of information and insights. We observed during our simulated studies that, even when insights are concise, reading newly generated insights while following the conversation can still distract the user.
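One direction for handling noisy knowledge bases is to gate retrieval results on a similarity threshold so that off-topic records never reach the insight generator. The sketch below illustrates this idea only; the threshold value, the `filter_relevant` helper, and the example scores are hypothetical and would need to be tuned per knowledge base.

```python
# Assumed cutoff; in practice this would be tuned per knowledge base.
SIM_THRESHOLD = 0.3

def filter_relevant(scored_docs, threshold=SIM_THRESHOLD):
    """Drop retrieved passages whose similarity falls below the cutoff,
    so noisy or off-topic records are not passed to the insight generator."""
    return [(doc, score) for doc, score in scored_docs if score >= threshold]

# Hypothetical (passage, similarity) pairs returned by retrieval.
hits = [
    ("metformin adverse events in adults with type 2 diabetes", 0.72),
    ("seasonal influenza vaccination coverage by province", 0.08),
]
kept = filter_relevant(hits)
```

Such a gate trades recall for precision: a high threshold suppresses misleading insights from irrelevant records but may also withhold marginally relevant ones, which is part of why robustness of the insight generation module remains a complementary requirement.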