Role-Play and Persona Behavior
Related topics:
- Beyond Single Models: Enhancing LLM Detection of Ambiguity in Requests through Debate. Large Language Models (LLMs) have demonstrated significant capabilities in understanding and generating human language, contributing to more natural interactions with complex systems. Howeve…
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society. The rapid advancement of chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be…
- Can AI Have a Personality? Prompt Engineering for AI Personality Simulation: A Chatbot Case Study in Gender-Affirming Voice Therapy Training. This thesis investigates whether large language models (LLMs) can be guided to simulate a consistent personality through prompt engineering. The study explores this concept within the context…
- Character is Destiny: Can Role-Playing Language Agents Make Persona-Driven Decisions? Can Large Language Models (LLMs) simulate humans in making important decisions? Recent research has unveiled the potential of using LLMs to develop role-playing language agents (RPLAs), mimicking main…
- Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning. Large Language Models (LLMs) are increasingly used to simulate human users in interactive settings such as therapy, education, and social role-play. While these simulations enable scalable training an…
- Cultural Evolution of Cooperation among LLM Agents. At present, relatively little is known about the dynamics of multiple LLM agents interacting over many generations of iterative deployment. In this paper, we examine whether a “society” of LLM agents …
- Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources
- CGMI: Configurable General Multi-Agent Interaction Framework [https://arxiv.org/abs/2308.12503](https://arxiv.org/abs/2308.12503) [[Memory]] [[Role Play]] “With the capabilities of large…
- Do Role-Playing Agents Practice What They Preach? Belief-Behavior Consistency in LLM-Based Simulations of Human Trust. As large language models (LLMs) are increasingly studied as role-playing agents to generate synthetic data for human behavioral research, ensuring that their outputs remain coherent with their assigne…
- Do Theory of Mind Benchmarks Need Explicit Human-like Reasoning in Language Models? Recent advancements in Large Language Models (LLMs) have shown promising performance on ToM benchmarks, raising the question: Do these benchmarks necessitate explicit human-like reasoning processes, o…
- H2HTalk: Evaluating Large Language Models as Emotional Companion. We present Heart-to-Heart Talk (H2HTalk), a benchmark assessing companions across personality development and empathetic interaction, balancing emotional intelligence with linguistic fluency. H2HTalk …
- InMind: Evaluating LLMs in Capturing and Applying Individual Human Reasoning Styles. LLMs have shown strong performance on human-centric reasoning tasks. While previous evaluations have explored whether LLMs can infer intentions or detect deception, they often overlook the individuali…
- Inspecting and Editing Knowledge Representations in Language Models. [[Natural Language Inference]] Neural language models (LMs) represent facts about the world described by text. Sometimes these facts derive from training data (in most LMs, a representation of the …
- LLM Strategic Reasoning: Agentic Study through Behavioral Game Theory. What does it truly mean for a language model to “reason” strategically, and can scaling up alone guarantee intelligent, context-aware decisions? Strategic decision-making requires adaptive reasoning, …
- LLMs as Method Actors: A Model for Prompt Engineering and Architecture. We introduce “Method Actors” as a mental model for guiding LLM prompt engineering and prompt architecture. Under this mental model, LLMs should be thought of as actors; prompts as scripts and cues; an…
- MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems. Human social interactions depend on the ability to infer others’ unspoken intentions, emotions, and beliefs—a cognitive skill grounded in the psychological concept of Theory of Mind (ToM). While large…
- Multi-agent cooperation through in-context co-player inference. Achieving cooperation among self-interested agents remains a fundamental challenge in multi-agent reinforcement learning. Recent work showed that mutual cooperation can be induced between “learning-aw…
- On the Adaptive Psychological Persuasion of Large Language Models. However, systematic exploration of their dual capabilities to autonomously persuade and resist persuasion, particularly in contexts involving psychological rhetoric, remains unexplored. In this paper,…
- Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models. Our approach involves evaluating the intrinsic personality traits of Open LLM agents and determining the extent to which these agents can mimic human personalities when conditioned by specific persona…
- PersuasiveToM: A Benchmark for Evaluating Machine Theory of Mind in Persuasive Dialogues. The ability to understand and predict the mental states of oneself and others, known as the Theory of Mind (ToM), is crucial for effective social scenarios. Although recent studies have evaluated ToM …
- Psychologically Enhanced AI Agents. We introduce MBTI-in-Thoughts, a framework for enhancing the effectiveness of Large Language Model (LLM) agents through psychologically grounded personality conditioning. Drawing on the Myers–Briggs T…
- Role play with large language models. Here we advocate two basic metaphors for LLM-based dialogue agents. First, taking a simple and intuitive view, we can see a dialogue agent as role-playing a single character. Second, taking a more nua…
- Role-Play with Large Language Models. Murray Shanahan: “What sorts of roles might the agent begin to take on? This is determined in part, of course, by the tone and subject matter of the ongoing conversation. But it is also determined, …
- RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. However, the closed-source nature of state-of-the-art LLMs and their general-purpose training limit role-playing optimization. In this paper, we introduce RoleLLM, a framework to benchmark, elicit, an…
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. In our environment, agents role-play an…
- SPICE: Self-Play In Corpus Environments Improves Reasoning. Self-improving systems require environmental interaction for continuous adaptation. We introduce SPICE (Self-Play In Corpus Environments), a reinforcement learning framework where a single model acts …
- The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind. As Large Language Models (LLMs) gain agentic abilities, they will have to navigate complex multiagent scenarios, interacting with human users and other agents in cooperative and competitive settings. …
- Think in Games: Learning to Reason in Games via Reinforcement Learning with Large Language Models. Large language models (LLMs) excel at complex reasoning tasks such as mathematics and coding, yet they frequently struggle with simple interactive tasks that young children perform effortlessly. This …
- Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning. The advancement of Large Language Models (LLMs) has spurred significant interest in Role-Playing Agents (RPAs) for applications such as emotional companionship and virtual interaction. However, recent…
- Too Good to be Bad: On the Failure of LLMs to Role-Play Villains. Large Language Models (LLMs) are increasingly tasked with creative generation, including the simulation of fictional characters. However, their ability to portray non-prosocial, antagonistic personas …
- Towards Safe and Honest AI Agents with Neural Self-Other Overlap. As AI systems increasingly make critical decisions, deceptive AI poses a significant challenge to trust and safety. We present Self-Other Overlap (SOO) fine-tuning, a promising approach in AI Safety t…
- Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning. Communicating in natural language is a powerful tool in multiagent settings, as it enables independent agents to share information in partially observable settings and allows zero-shot coordination wi…
- Two Tales of Persona in LLMs: A Survey of Role-Playing and Personalization. The concept of persona, originally adopted in dialogue literature, has re-surged as a promising framework for tailoring large language models (LLMs) to specific context (e.g., personalized search, LLM…
- What we talk to when we talk to language models. David Chalmers [[Linguistics, NLP, NLU]] [[Role Play]] [[Philosophy Subjectivity]] Quasi-interpretivism does not say anything about whether LLMs have beliefs and desires. But it does make it plausib…