Theory of Mind
Related topics:
- A Systematic Review on the Evaluation of Large Language Models in Theory of Mind Tasks. This systematic review synthesizes current efforts to assess LLMs’ ability to perform ToM tasks—an essential aspect of human cognition involving the attribution of mental states to oneself and others.…
- AI Models Exceed Individual Human Accuracy in Predicting Everyday Social Norms. A fundamental question in cognitive science concerns how social norms are acquired and represented. While humans typically learn norms through embodied social experience, we investigated whether large…
- Character is Destiny: Can Role-Playing Language Agents Make Persona-Driven Decisions? Can Large Language Models (LLMs) simulate humans in making important decisions? Recent research has unveiled the potential of using LLMs to develop role-playing language agents (RPLAs), mimicking main…
- DPMT: Dual Process Multi-scale Theory of Mind Framework for Real-time Human-AI Collaboration. Real-time human-artificial intelligence (AI) collaboration is crucial yet challenging, especially when AI agents must adapt to diverse and unseen human behaviors in dynamic scenarios. Existing large l…
- Do LLMs Exhibit Human-Like Reasoning? Evaluating Theory of Mind in LLMs for Open-Ended Responses. Despite advancements, the extent to which LLMs truly understand ToM reasoning and how closely it aligns with human ToM reasoning remains inadequately explored in open-ended scenarios. Motivated by thi…
- Do Theory of Mind Benchmarks Need Explicit Human-like Reasoning in Language Models? Recent advancements in Large Language Models (LLMs) have shown promising performance on ToM benchmarks, raising the question: Do these benchmarks necessitate explicit human-like reasoning processes, o…
- Does It Make Sense to Speak of Introspection in Large Language Models? Large language models (LLMs) exhibit compelling linguistic behaviour, and sometimes offer self-reports, that is to say statements about their own nature, inner workings, or behaviour. In humans, such …
- Evaluating Large Language Models in Theory of Mind Tasks. Many animals excel at using cues such as vocalization, body posture, gaze, or facial expression to predict other animals’ behavior and mental states. Dogs, for example, can easily distinguish between …
- Evaluating Theory of Mind and Internal Beliefs in LLM-Based Multi-Agent Systems. LLM-based MAS are gaining popularity due to their potential for collaborative problem-solving enhanced by advances in natural language comprehension, reasoning, and planning. Research in The…
- Expedient Assistance and Consequential Misunderstanding: Envisioning an Operationalized Mutual Theory of Mind. Design fictions allow us to prototype the future. They enable us to interrogate emerging or non-existent technologies and examine their implications. We present three design fictions that probe the po…
- Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models. Existing LLM reasoning methods have shown impressive capabilities across various tasks, such as solving math and coding problems. However, applying these methods to scenarios without ground-truth answ…
- Improving Dialog Systems for Negotiation with Personality Modeling. In this paper, we explore the ability to model and infer personality types of opponents, predict their responses, and use this information to adapt a dialog agent’s high-level strategy in negotiation …
- InMind: Evaluating LLMs in Capturing and Applying Individual Human Reasoning Styles. LLMs have shown strong performance on human-centric reasoning tasks. While previous evaluations have explored whether LLMs can infer intentions or detect deception, they often overlook the individuali…
- MOMENTS: A Comprehensive Multimodal Benchmark for Theory of Mind. Understanding Theory of Mind is essential for building socially intelligent multimodal agents capable of perceiving and interpreting human behavior. We introduce MOMENTS (Multimodal Mental States), a …
- Machine Psychology. We highlight and summarize theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table. It paves the way for a "machine psychology" f…
- MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems. Human social interactions depend on the ability to infer others’ unspoken intentions, emotions, and beliefs—a cognitive skill grounded in the psychological concept of Theory of Mind (ToM). While large…
- PATIENT-Ψ: Using Large Language Models to Simulate Patients for Training Mental Health Professionals. Mental illness remains one of the most critical public health issues. Despite its importance, many mental health professionals highlight a disconnect between their training and actual real-world patie…
- PersuasiveToM: A Benchmark for Evaluating Machine Theory of Mind in Persuasive Dialogues. The ability to understand and predict the mental states of oneself and others, known as the Theory of Mind (ToM), is crucial for effective social scenarios. Although recent studies have evaluated ToM …
- The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind. As Large Language Models (LLMs) gain agentic abilities, they will have to navigate complex multiagent scenarios, interacting with human users and other agents in cooperative and competitive settings. …
- Theory of Mind abilities of Large Language Models in Human-Robot Interaction: An Illusion? We study a special application of ToM abilities that has higher stakes and possibly irreversible consequences: Human Robot Interaction. In this work, we explore the task of Perceived Behavior Recogni…
- Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models. In this position paper, we seek to answer two road-blocking questions: (1) How can we taxonomize a holistic landscape of machine ToM? (2) What is a more effective evaluation protocol for machine ToM? …
- Towards Machine Theory of Mind with Large Language Model-Augmented Inverse Planning. We propose a hybrid approach to machine Theory of Mind (ToM) that uses large language models (LLMs) as a mechanism for generating hypotheses and likelihood functions with a Bayesian inverse planning m…
- Towards Safe and Honest AI Agents with Neural Self-Other Overlap. As AI systems increasingly make critical decisions, deceptive AI poses a significant challenge to trust and safety. We present Self-Other Overlap (SOO) fine-tuning, a promising approach in AI Safety t…
- Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning. Communicating in natural language is a powerful tool in multiagent settings, as it enables independent agents to share information in partially observable settings and allows zero-shot coordination wi…