LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools

Paper · arXiv 2401.12576 · Published January 23, 2024

With LLMCHECKUP, we present an easily accessible tool that allows users to chat with any state-of-the-art large language model (LLM) about its behavior. By connecting LLMs with a broad spectrum of Explainable AI (XAI) tools, e.g. feature attributions, embedding-based similarity, and prompting strategies for counterfactual (self-)explanations, we enable them to generate all explanations by themselves and to handle intent recognition without fine-tuning. The explanations are presented as an interactive dialogue that supports follow-up questions.
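As a hedged illustration of one of the XAI operations named above, and not the paper's actual implementation, embedding-based similarity can be sketched as retrieving, for a given input, the most similar previously seen example by cosine similarity between embedding vectors. The toy vectors below are hypothetical stand-ins for real model embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, corpus):
    # corpus: list of (text, embedding) pairs; returns the text of the
    # example whose embedding is closest to the query by cosine similarity.
    return max(corpus, key=lambda pair: cosine_similarity(query_vec, pair[1]))[0]

# Hypothetical toy embeddings; a real system would use an LLM's encoder.
corpus = [
    ("the movie was great", [0.9, 0.1, 0.0]),
    ("terrible acting",     [0.1, 0.9, 0.0]),
]
query = [0.8, 0.2, 0.1]
print(most_similar(query, corpus))  # → the movie was great
```

In a dialogue setting, the retrieved nearest neighbor can then be surfaced to the user as a similarity-based explanation of the model's prediction.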