Self-reflective Uncertainties: Do LLMs Know Their Internal Answer Distribution?
To reveal when a large language model (LLM) is uncertain about a response, uncertainty quantification commonly produces percentage numbers alongside the output. But is this all we can do? We argue that in the output space of LLMs, the space of strings, there exist strings expressive enough to summarize the distribution over output strings that the LLM deems possible. We lay a foundation for this new avenue of uncertainty explication and present SelfReflect, a theoretically motivated metric that assesses how faithfully a string summarizes an LLM’s internal answer distribution. We show that SelfReflect discriminates even subtle differences between candidate summary strings and that it aligns with human judgement, outperforming alternative metrics such as LLM judges and embedding comparisons. With SelfReflect, we investigate a number of self-summarization methods and find that even state-of-the-art reasoning models struggle to explicate their internal uncertainty. However, faithful summaries can be generated by sampling multiple answers and summarizing them. Our metric enables future work towards this universal form of LLM uncertainties.
Summaries are traditionally evaluated in terms of faithfulness to the long document, relevance of the chosen information, and fluency and coherence of their sentences [Särkkä and Solin, 2019], as rated by humans or, more recently, by LLM judges.
The summary string s does not summarize another string but a distribution over strings pθ(A | q). This means we must go beyond comparing s to a single sampled string a ∼ pθ(A | q) and instead quantify how faithfully s represents the density that pθ(A | q) defines over the string space, i.e., all possible answers and how likely each of them is. To this end, we rethink masked-out tasks through the lens of sufficient statistics in the following section.
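The idea above can be made concrete with a toy sketch: a summary is faithful if, conditioned on it, one can predict masked-out words of answers sampled from pθ(A | q). The real metric would condition an LLM on the summary; here, as an assumption for illustration, a simple unigram model built from the summary's words stands in for that fill-in predictor, and the smoothing constant is chosen arbitrarily.

```python
from collections import Counter
import math


def masked_word_score(summary: str, sampled_answers: list[str]) -> float:
    """Toy proxy for summary faithfulness: average log-likelihood of
    masked-out answer words under a unigram model of the summary.
    (In the actual metric, an LLM conditioned on the summary would
    produce these fill-in probabilities.) Higher = more faithful."""
    vocab = Counter(summary.lower().split())
    total = sum(vocab.values())
    smoothing = 1e-3  # assumed smoothing for words absent from the summary
    log_likelihood, n_words = 0.0, 0
    for answer in sampled_answers:
        # Treat each word of each sampled answer as the masked-out target.
        for word in answer.lower().split():
            p = (vocab.get(word, 0) + smoothing) / (
                total + smoothing * (len(vocab) + 1)
            )
            log_likelihood += math.log(p)
            n_words += 1
    return log_likelihood / n_words
```

A summary that mentions both possible answers scores higher on a mixed sample of answers than one that omits a possibility, which is the behaviour the metric is designed to reward.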
We make the task harder by comparing good summaries to almost-good summaries, which contain only facts that are faithful to the answer distribution but leave out some of the possibilities and details that the good summary mentions. SelfReflect gives the good summary a better score than the almost-good summaries for 94.2% of all questions. Most other approaches, including the LLM judge used in the literature, can no longer distinguish these fine-grained quality differences.
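The 94.2% figure is a per-question win rate: for each question, the good summary's score is compared against the almost-good one's, and wins are averaged. A minimal sketch, assuming a higher-is-better scoring convention (the concrete score lists below are hypothetical):

```python
def win_rate(good_scores: list[float], almost_good_scores: list[float]) -> float:
    """Fraction of questions where the good summary outscores the
    almost-good summary, assuming higher score = more faithful."""
    assert len(good_scores) == len(almost_good_scores)
    wins = sum(g > a for g, a in zip(good_scores, almost_good_scores))
    return wins / len(good_scores)


# Hypothetical per-question scores for three questions:
rate = win_rate([0.9, 0.8, 0.7], [0.5, 0.85, 0.6])  # good wins on 2 of 3
```

A metric that separates these pairs cleanly yields a win rate near 1; a metric blind to the fine-grained differences hovers near chance at 0.5.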