Unlocking Varied Perspectives: A Persona-Based Multi-Agent Framework with Debate-Driven Text Planning for Argument Generation
Writing persuasive arguments is a challenging task for both humans and machines. It entails incorporating high-level beliefs from various perspectives on the topic, along with deliberate reasoning and planning to construct a coherent narrative. Current language models often generate surface tokens autoregressively, lacking explicit integration of these underlying controls, which results in limited output diversity and coherence. In this work, we propose a persona-based multi-agent framework for argument writing. Inspired by human debate, we first assign each agent a persona representing its high-level beliefs from a unique perspective, and then design an agent interaction process in which the agents collaboratively debate and discuss ideas to form an overall plan for argument writing. Such a debate process enables fluid and nonlinear development of ideas. We evaluate our framework on argumentative essay writing.
Firstly, it necessitates social understanding capabilities for a profound comprehension of the topic and the inclusion of varied, pertinent viewpoints to bolster the argument’s persuasiveness. Secondly, it demands strong logical reasoning and strategic text planning to create a coherent overarching structure, which integrates different viewpoints into a well-organized discourse. Lastly, fundamental writing skills are crucial for effectively transforming the plans into surface text.
Despite their effectiveness, LLMs often fail to offer diverse and rich content, particularly in generating subjective content with multiple viewpoints (Muscato et al., 2024; Hayati et al., 2023). This limitation arises because LLMs are trained to model averages and may overlook the nuance and in-group variation of perspectives (Sorensen et al., 2024).
LLMs often generate text autoregressively without explicit planning, in contrast to human writing, which typically involves extensive planning to establish a coherent high-level logical flow.
Additionally, a critic agent is integrated to challenge the ideas presented, ensuring a robust discussion. During the debate, agents engage in dialogue, respond to critiques, and progressively refine their ideas. This collaboration not only fosters creativity and critical thinking but also aids in self-revision and self-critique. The discussions are then distilled into an argument plan that offers diverse viewpoints and maintains logical coherence. Unlike previous planning methods that sequentially outline content (Hu et al., 2022b; Goldfarb-Tarrant et al., 2020; Yang et al., 2022), our debate-driven planning allows fluid and nonlinear development of ideas, where agents can dynamically shift between proposals, revisit earlier concepts, and organically evolve the discussion.
Our multi-agent framework generates an argument (y) in three steps: (1) persona assignment, which creates and assigns an underlying persona to each agent; (2) debate-based planning, in which agents collaboratively engage in debate and discussion to form a high-level plan; and (3) argument writing, which transforms the plan into surface text.
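The data flow across the three steps can be sketched as a single pipeline. The following is a minimal illustration, assuming a generic `llm` chat-completion callable; the function name and all prompt wordings are hypothetical, not the paper's actual implementation:

```python
def generate_argument(topic: str, llm, n_agents: int = 3) -> str:
    """Hypothetical sketch of the pipeline: persona assignment ->
    debate-based planning -> argument writing."""
    # (1) Persona assignment: create one persona per agent.
    personas = llm(f"Create {n_agents} personas with distinct viewpoints on: {topic}")
    # (2) Debate-based planning: agents debate and distill a high-level plan.
    plan = llm(f"As these personas, debate '{topic}' and distill the discussion "
               f"into an argument plan.\nPersonas: {personas}")
    # (3) Argument writing: realize the plan as surface text.
    return llm(f"Write an argumentative essay on '{topic}' following this plan:\n{plan}")
```

In practice, step (2) would be a multi-turn exchange among agents rather than a single call; the sketch only fixes the overall data flow between steps.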
Persona Pool Creation. We instruct LLMs to create a pool of 5 to 10 personas, each embodying a distinct viewpoint relevant to the topic. We formalize a persona with a brief description and a claim on the topic, as illustrated in Figure 1. To ensure fairness and inclusivity, the model is directed to create personas representing a diverse range of communities and perspectives, which encourages the model to consider nuance and in-group variation (Sorensen et al., 2024).
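A persona formalized this way (a brief description plus a claim on the topic) could be represented and parsed along the following lines. This is only an illustrative sketch: the prompt wording and the `description | claim` line format are assumptions, not the paper's actual prompt:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    description: str  # brief description of who the persona is
    claim: str        # the persona's claim on the topic

def create_persona_pool(topic: str, llm, n: int = 5) -> list[Persona]:
    # Ask for one "description | claim" pair per line (format is an assumption).
    prompt = (f"Create {n} personas from a diverse range of communities, each with "
              f"a distinct viewpoint on '{topic}'. Format: description | claim, "
              f"one per line.")
    lines = [line for line in llm(prompt).splitlines() if "|" in line]
    return [Persona(*[part.strip() for part in line.split("|", 1)])
            for line in lines]
```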
In our framework, the N agents form a main team, fostering collective discussions and developing a plan outlining the high-level logical flow. Additionally, we introduce a critic agent representing an opposing viewpoint. The role of the critic is to identify and challenge weaknesses in the main team’s proposals. Incorporating such a critic is crucial, as a robust argument necessitates anticipating opposing perspectives and devising effective rebuttals during discussions.
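A single debate round with a main team of persona agents and a critic could be orchestrated as follows. This is a minimal sketch under stated assumptions: the turn order (each main-team agent speaks, then the critic challenges) and all prompts are illustrative, and `llm` stands in for any chat-completion callable:

```python
def debate_plan(topic: str, personas: list, llm, rounds: int = 2) -> str:
    """Illustrative debate loop: main-team agents take turns, a critic
    challenges the discussion, and the transcript is distilled into a plan."""
    transcript = []
    for _ in range(rounds):
        # Main team: each persona agent contributes given the discussion so far.
        for persona in personas:
            turn = llm(f"You are {persona}. Given the discussion so far:\n"
                       + "\n".join(transcript)
                       + f"\nAdd your viewpoint on '{topic}'.")
            transcript.append(turn)
        # Critic: identify and challenge weaknesses in the proposals.
        critique = llm("You are a critic holding an opposing viewpoint. "
                       "Challenge the weakest point in this discussion:\n"
                       + "\n".join(transcript))
        transcript.append(critique)
    # Distill the full discussion into a high-level argument plan.
    return llm("Distill this debate into a coherent argument plan:\n"
               + "\n".join(transcript))
```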
We implement all modules by prompting an LLM. For baselines, we include: (1) directly prompting an LLM (LLM-E2E) to write an argumentative essay in an end-to-end manner; (2) chain-of-thought prompting for content planning (LLM-Plan), where the model first generates an overall plan and then produces the argument (Wei et al., 2022); (3) AGENT-DEBATE, multi-agent debate for planning without persona assignment (Liang et al., 2023); and (4) AMERICANO, decomposed argument generation with discourse-driven planning (Hu et al., 2023). We use ChatGPT as the backbone LLM for all methods.