Empirical Study of Symmetrical Reasoning in Conversational Chatbots
This work explores the capability of conversational chatbots powered by large language models (LLMs) to understand and characterize predicate symmetry, a cognitive linguistic function traditionally believed to be an inherent human trait. Leveraging in-context learning (ICL), a paradigm that enables chatbots to learn new tasks from prompts without re-training, we assess the symmetrical reasoning of five chatbots: ChatGPT 4, Huggingface chat AI, Microsoft’s Copilot AI, LLaMA through Perplexity, and Gemini Advanced.
Linguistic symmetry is one such fundamental property of natural language [11] and can be present at the word, sentence, and conceptual levels [12]. For instance, the existence of symmetry in language structures is what allows humans to implicitly infer ‘Mary met John’ from ‘John met Mary.’
The symmetry inference licensed by the word kissed is not the same in ‘John and Mary kissed’ as it is in ‘John kissed Mary.’ For this reason, the symmetrical property of a word cannot be deterministically assigned and requires human cognition for its categorization. Given that conversational chatbots are often viewed as approximating human linguistic competence, we aim to investigate their ability to grasp predicate symmetry.
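To make the ICL setup concrete, the sketch below assembles a few-shot prompt that asks a chatbot to judge whether a predicate is symmetric. The demonstration sentences, labels, and the `build_symmetry_prompt` helper are illustrative assumptions, not the prompts actually used in the study.

```python
# Hypothetical sketch of a few-shot in-context-learning (ICL) prompt for
# predicate-symmetry judgments. The labeled examples below are assumed
# demonstrations, not the study's actual materials.

FEW_SHOT_EXAMPLES = [
    ("John met Mary.", "symmetric: 'Mary met John' can be inferred"),
    ("John kissed Mary.", "non-symmetric: 'Mary kissed John' cannot be inferred"),
]

def build_symmetry_prompt(sentence: str) -> str:
    """Assemble an ICL prompt: an instruction, labeled demonstrations,
    and a new query sentence left for the model to complete."""
    lines = ["Decide whether the predicate in each sentence is symmetric."]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {example}\nJudgment: {label}")
    # The unlabeled query: the chatbot supplies the final judgment.
    lines.append(f"Sentence: {sentence}\nJudgment:")
    return "\n\n".join(lines)

prompt = build_symmetry_prompt("Alice and Bob argued.")
print(prompt)
```

Because the task is learned entirely from the prompt, the same template can probe each of the five chatbots without any re-training, which is what makes ICL suitable for this comparison.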