Branch-Solve-Merge Improves Large Language Model Evaluation and Generation

Paper · arXiv 2310.15123 · Published October 23, 2023
Self-Refinement · Self-Consistency · Feedback · Natural Language Inference · Task Planning

Large Language Models (LLMs) are frequently used for multi-faceted language generation and evaluation tasks that involve satisfying intricate user constraints or taking into account multiple aspects and criteria. However, their performance can fall short due to the model's lack of coherence and inability to plan and decompose the problem. We propose BRANCH-SOLVE-MERGE (BSM), a Large Language Model program (Schlag et al., 2023) for tackling such challenging natural language tasks. It consists of branch, solve, and merge modules that are parameterized with specific prompts to the base LLM. These three modules plan a decomposition of the task into multiple parallel sub-tasks, independently solve them, and fuse the solutions to the sub-tasks. We apply our method to the tasks of LLM response evaluation and constrained text generation and evaluate its effectiveness with multiple LLMs, including Vicuna, LLaMA-2-chat, and GPT-4. BSM improves the evaluation correctness and consistency for each LLM, enhancing human-LLM agreement by up to 26%, reducing length and pairwise position biases by up to 50%, and allowing LLaMA-2-chat to match or outperform GPT-4 on most domains.

However, LLMs still struggle with tasks that have intricate requirements, such as satisfying a set of constraints or meeting objectives that are, in general, multi-dimensional (e.g., evaluating the quality of generated text against several diverse criteria). This appears to stem primarily from the model's lack of self-consistency and inability to plan (Yao et al., 2023b; Bubeck et al., 2023). Recent research has tried to mitigate these limitations by developing iterative methods that elicit reasoning, planning, and refinement, but these remain open problems (Bai et al., 2022b; Madaan et al., 2023; Ganguli et al., 2023; Yao et al., 2023c; Chen et al., 2023).

Our approach is an instance of a Large Language Model program (Schlag et al., 2023; Dohan et al., 2022) and consists of three modules (branch, solve, and merge), each parameterized with specific prompts to an underlying LLM.

Given an arbitrary user task, the 'branch' module generates a solution plan by decomposing the task into multiple parallel sub-tasks, each represented by a unique branch corresponding to a different component required to solve the overall problem. The 'solve' module then solves each of these independent sub-problems. Finally, the 'merge' module fuses the solutions to these sub-problems to generate the overall solution.
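To make the structure concrete, here is a minimal Python sketch of such an LLM program. It assumes a generic `llm(prompt)` completion function (a hypothetical placeholder for any LLM API), and the prompt wording is illustrative rather than the paper's exact prompts:

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM (e.g., LLaMA-2-chat or GPT-4)."""
    raise NotImplementedError  # substitute a real LLM API call here

def branch(task: str) -> list[str]:
    # 'branch': plan a decomposition of the task into parallel sub-tasks.
    plan = llm(
        "Decompose the following task into a list of independent sub-tasks, "
        "one per line:\n" + task
    )
    return [line.strip() for line in plan.splitlines() if line.strip()]

def solve(task: str, sub_task: str) -> str:
    # 'solve': address each sub-task independently, conditioned on the task.
    return llm(f"Task: {task}\nSub-task: {sub_task}\nSolve this sub-task:")

def merge(task: str, solutions: list[str]) -> str:
    # 'merge': fuse the sub-task solutions into one overall solution.
    joined = "\n\n".join(solutions)
    return llm(
        f"Task: {task}\nSub-task solutions:\n{joined}\n"
        "Combine these into a single coherent overall solution:"
    )

def branch_solve_merge(task: str) -> str:
    sub_tasks = branch(task)
    solutions = [solve(task, s) for s in sub_tasks]
    return merge(task, solutions)
```

Because the sub-tasks are independent, the solve calls can also be issued in parallel, which is what makes the decomposition a plan rather than a sequential chain.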

Constrained Text Generation. State-of-the-art LLMs struggle with constrained text generation tasks, e.g., the constraint of writing a story that must include several given concepts. Models commonly either violate the constraints or generate incoherent text in order to satisfy them.
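As a usage sketch building on the hypothetical `branch_solve_merge` function above (the concepts below are illustrative examples, not from the paper), BSM can branch over groups of required concepts, write a story conditioned on each group, and merge the intermediate stories into one that satisfies all constraints:

```python
# Hypothetical usage of the branch_solve_merge() sketch for constrained story
# generation: the branch module is expected to group the required concepts,
# solve writes a story for each group, and merge fuses the intermediate
# stories into a single coherent one.
concepts = ["lighthouse", "violin", "storm", "letter"]  # example constraints
task = (
    "Write a short, coherent story that includes all of the following "
    "concepts: " + ", ".join(concepts)
)
print(branch_solve_merge(task))
```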