Can evolutionary search beat sampling and revision at inference time?
Can LLMs evolve populations of solutions through recombination and selection to outperform simpler inference strategies? This matters because it could reveal whether biologically inspired search improves planning without formal problem definitions.
Mind Evolution is an evolutionary search strategy for LLM inference that evolves a diverse population of candidate solutions. The LLM generates, recombines, and refines candidates based on evaluator feedback. This is analogous to combining divergent thinking (free-flowing parallel exploration) with convergent thinking (evaluation and selection) — considered hallmarks of intelligent problem-solving.
The key advantage over previous inference strategies: Mind Evolution works in natural language spaces without requiring task formalization. It only needs a programmatic solution evaluator — exploiting the observation that evaluating a candidate solution is often easier than generating one. This removes the need for formal problem definitions, expert-designed search spaces, or auxiliary verifiers.
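This asymmetry can be made concrete with a toy evaluator, a minimal sketch in which the `Visit` structure, constraint names, and scoring rule are all illustrative inventions, not the benchmarks' actual format: checking an itinerary against explicit constraints is a few lines of code, while producing an itinerary that satisfies them all is the hard search problem.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    city: str
    days: int

def evaluate_plan(plan: list[Visit], required_days: dict[str, int],
                  budget_days: int) -> tuple[float, list[str]]:
    """Score a candidate itinerary; return (fitness, textual feedback).

    Evaluation is a cheap constraint check, whereas generating a
    satisfying plan requires search: the asymmetry Mind Evolution exploits.
    """
    feedback = []
    total = sum(v.days for v in plan)
    if total > budget_days:
        feedback.append(f"Plan uses {total} days; budget is {budget_days}.")
    for city, days in required_days.items():
        got = sum(v.days for v in plan if v.city == city)
        if got != days:
            feedback.append(f"{city}: planned {got} days, required {days}.")
    # Fitness: fraction of constraints satisfied
    # (one budget constraint plus one per required city).
    n_constraints = 1 + len(required_days)
    fitness = 1.0 - len(feedback) / n_constraints
    return fitness, feedback
```

The textual feedback matters as much as the scalar fitness: in Mind Evolution, the evaluator's messages are fed back to the LLM so it can refine candidates, not just rank them.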
Three mechanisms drive effectiveness:
- Population diversity via island model: Distinct sub-populations evolve independently between migration and reset events. Migration moves high-fitness solutions across islands; island reset replaces low-fitness populations with strong solutions from the global pool. This sustains exploration diversity that single-population evolution loses.
- LLM-based genetic operators: Instead of traditional mutation and crossover on symbolic representations, the LLM itself recombines and refines candidates using natural language understanding. This enables meaningful variation in unstructured solution spaces.
- Fitness-proportional selection: Parents with greater fitness are more likely to be selected for recombination, creating progressive quality improvement.
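The three mechanisms compose into a single loop. The sketch below is schematic: the island count, population size, and migration schedule are illustrative defaults rather than the paper's settings, `recombine` stands in for the LLM-based genetic operator, and island reset is omitted for brevity. Fitness-proportional selection is implemented as roulette-wheel sampling.

```python
import random

def select_parent(population, fitnesses, rng):
    """Fitness-proportional (roulette-wheel) selection."""
    total = sum(fitnesses)
    if total == 0:
        return rng.choice(population)
    return rng.choices(population, weights=fitnesses, k=1)[0]

def evolve(seed_candidates, evaluate, recombine, *, n_islands=4, island_size=8,
           generations=10, migrate_every=3, rng=None):
    """Island-model evolutionary search (schematic).

    evaluate(candidate) -> float fitness; recombine(a, b) -> child candidate.
    In Mind Evolution, recombination and refinement are performed by the LLM
    in natural language, guided by evaluator feedback.
    """
    rng = rng or random.Random(0)
    islands = [[rng.choice(seed_candidates) for _ in range(island_size)]
               for _ in range(n_islands)]
    for gen in range(generations):
        for i, pop in enumerate(islands):
            fits = [evaluate(c) for c in pop]
            children = [recombine(select_parent(pop, fits, rng),
                                  select_parent(pop, fits, rng))
                        for _ in range(island_size)]
            # Keep the best island_size of parents + children (elitist truncation).
            merged = sorted(pop + children, key=evaluate, reverse=True)
            islands[i] = merged[:island_size]
        if (gen + 1) % migrate_every == 0:
            # Migration: each island's best replaces the next island's worst,
            # sustaining diversity across otherwise independent sub-populations.
            bests = [max(pop, key=evaluate) for pop in islands]
            for i, pop in enumerate(islands):
                worst = min(range(len(pop)), key=lambda j: evaluate(pop[j]))
                pop[worst] = bests[(i - 1) % n_islands]
    return max((c for pop in islands for c in pop), key=evaluate)
```

Because islands only exchange solutions at migration events, each sub-population explores its own region of the solution space in between, which is what prevents the premature convergence a single shared population suffers.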
On TravelPlanner and Natural Plan benchmarks, Mind Evolution solves more than 98% of problem instances using Gemini 1.5 Pro — significantly outperforming Best-of-N and Sequential Revision when controlling for inference cost.
This extends the test-time compute landscape beyond the standard parallel-vs-sequential tradeoff. Mind Evolution is neither pure parallel sampling (Best-of-N) nor pure sequential refinement; it is iterative population evolution that combines elements of both. The island model specifically addresses the diversity-collapse problem identified in "Do iterative refinement methods suffer from overthinking?": by maintaining multiple independent populations, evolution sustains exploration where single-trajectory refinement converges prematurely.
Source: Novel Architectures
Related concepts in this collection
- Why does majority voting outperform more complex inference methods? Simple majority voting across independent samples often matches or beats sophisticated alternatives like Best-of-N and sequential revision. What makes this basic approach so hard to beat for reasoning models? Relation: Mind Evolution goes beyond voting, using population-based recombination rather than simple aggregation.
- Do iterative refinement methods suffer from overthinking? Iterative refinement approaches like Self-Refine structurally resemble token-level overthinking in o1-like models. Does revision across multiple inference calls reproduce the same accuracy degradation seen within single inferences? Relation: Mind Evolution's evolutionary approach avoids this through population diversity and the island model.
- How should we balance parallel versus sequential compute at test time? Test-time compute can prioritize breadth (trying many approaches) or depth (refining one approach). Which strategy works better, and does the answer depend on the problem? Relation: Mind Evolution transcends this dichotomy with iterative evolution over parallel sub-populations.
- Can tree search replace human feedback in LLM training? Explores whether Monte Carlo Tree Search can generate quality signals for self-improvement without expensive human annotations. This matters because annotation bottlenecks currently limit LLM scaling. Relation: MCTS searches a tree, Mind Evolution searches a population; both rely on structured exploration.
Original note title: evolutionary search at inference time outperforms best-of-n and sequential revision on natural language planning