Do Large Language Models Understand Conversational Implicature -- A Case Study with a Chinese Sitcom

Paper · arXiv 2404.19509 · Published April 30, 2024
Philosophy, Subjectivity, Linguistics, NLP, NLU

Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, each annotated with the Gricean maxims that are violated. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on the multiple-choice questions.

While all models generate largely fluent and self-consistent text, their explanations score low on reasonability, with GPT-4 as the only exception, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in a conversation. Moreover, we find that LLMs' performance does not vary significantly across Gricean maxims, suggesting that LLMs do not process implicatures derived from different maxims differently.
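The multiple-choice evaluation described above reduces to a simple accuracy computation over model predictions. Below is a minimal sketch of such scoring; the field names (`answer`, `prediction`) are illustrative assumptions, not the actual schema of the SwordsmanImp dataset.

```python
# Hedged sketch: scoring multiple-choice predictions against gold labels.
# The dict keys below are hypothetical, not SwordsmanImp's real format.

def mcq_accuracy(examples):
    """Return the fraction of examples where the predicted choice
    matches the gold answer (0.0 for an empty list)."""
    if not examples:
        return 0.0
    correct = sum(1 for ex in examples if ex["prediction"] == ex["answer"])
    return correct / len(examples)

# Toy usage with made-up items: three of four predictions are correct.
examples = [
    {"answer": "B", "prediction": "B"},
    {"answer": "C", "prediction": "A"},
    {"answer": "D", "prediction": "D"},
    {"answer": "A", "prediction": "A"},
]
print(mcq_accuracy(examples))  # → 0.75
```

Reported accuracies such as GPT-4's 94% would come from the same kind of exact-match comparison over all 200 questions.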