Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

Paper · arXiv 2402.00367 · Published February 1, 2024

In this work, we study approaches to identifying LLM knowledge gaps and abstaining from answering questions when such gaps are present. We first adapt existing approaches to model calibration or adaptation through fine-tuning/prompting and analyze their ability to abstain from generating low-confidence outputs. Motivated by their failures in self-reflection and over-reliance on held-out sets, we propose two novel approaches based on model collaboration, i.e., LLMs probing other LLMs for knowledge gaps, either cooperatively or competitively. Extensive experiments with three LLMs on four QA tasks featuring diverse knowledge domains demonstrate that both the cooperative and competitive approaches to unveiling LLM knowledge gaps achieve up to 19.3% improvement in abstain accuracy over the strongest baseline.
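As a rough illustration of the cooperative variant, peer LLMs can review a proposed answer and the system abstains when most of them flag it as unreliable. This is a minimal sketch, not the paper's exact protocol: the `ask` function, the prompts, and the majority-vote rule are all assumptions standing in for whatever LLM API and probing strategy one actually uses.

```python
# Sketch: cooperative multi-LLM probing for abstention.
# `ask` is a hypothetical stand-in for any chat-completion API call.

def ask(model: str, prompt: str) -> str:
    """Placeholder: route `prompt` to `model` and return its text reply."""
    raise NotImplementedError("wire up your LLM API of choice here")

def cooperative_abstain(question: str, answerer: str, reviewers: list[str]) -> tuple[str, bool]:
    """Answer with one LLM, then let peer LLMs vote on whether to abstain."""
    answer = ask(answerer, f"Answer concisely: {question}")
    votes = []
    for reviewer in reviewers:
        verdict = ask(
            reviewer,
            f"Question: {question}\nProposed answer: {answer}\n"
            "Is this answer factually reliable? Reply YES or NO.",
        )
        votes.append(verdict.strip().upper().startswith("YES"))
    # Abstain when a majority of peer reviewers flag the answer as unreliable.
    abstain = sum(votes) <= len(votes) // 2
    return answer, abstain
```

A competitive variant would instead have the probing LLMs challenge the answerer (e.g., with conflicting evidence) and abstain when the answerer's response proves unstable; the single-answerer/reviewer split above is just one possible arrangement.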

Consequently, we posit that abstaining from generating low-confidence outputs should be part of LLMs’ functionality, and we ask a crucial research question: how can we identify knowledge gaps in LLMs? Developing and evaluating robust mechanisms for this abstain problem improves LLM reliability, reduces hallucinations, and mitigates biases arising from model uncertainty.
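For concreteness, the calibration-style baselines referenced above reduce "abstain on low confidence" to a threshold test. The sketch below assumes access to per-token log-probabilities (as some LLM APIs expose) and uses an illustrative threshold; in practice the threshold is the held-out-set dependency the abstract criticizes.

```python
import math

def abstain_by_confidence(token_logprobs: list[float], threshold: float = 0.75) -> bool:
    """Abstain when sequence-level confidence falls below a threshold.

    Confidence here is the geometric mean of token probabilities,
    i.e. exp(mean log-probability). Both the input format and the
    0.75 default are assumptions for illustration; real systems tune
    the threshold on a held-out calibration set.
    """
    avg_logprob = sum(token_logprobs) / max(len(token_logprobs), 1)
    confidence = math.exp(avg_logprob)
    return confidence < threshold
```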