Reinforcement Learning for LLMs · Agentic and Multi-Agent Systems · LLM Reasoning and Architecture

Can evolutionary search beat sampling and revision at inference time?

Can LLMs evolve populations of solutions through recombination and selection to outperform simpler inference strategies? This matters because it could reveal whether biologically inspired search can improve planning without formal problem definitions.

Note · 2026-02-23 · sourced from Novel Architectures

Mind Evolution is an evolutionary search strategy for LLM inference that evolves a diverse population of candidate solutions. The LLM generates, recombines, and refines candidates based on evaluator feedback. This is analogous to combining divergent thinking (free-flowing parallel exploration) with convergent thinking (evaluation and selection) — considered hallmarks of intelligent problem-solving.

The key advantage over previous inference strategies: Mind Evolution works in natural language spaces without requiring task formalization. It only needs a programmatic solution evaluator — exploiting the observation that evaluating a candidate solution is often easier than generating one. This removes the need for formal problem definitions, expert-designed search spaces, or auxiliary verifiers.
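To make the "evaluating is easier than generating" observation concrete, here is a minimal sketch of what a programmatic solution evaluator could look like for a travel-planning task. The function name, plan representation, and constraints are invented for illustration; Mind Evolution's actual evaluators are task-specific and not specified here.

```python
# Hypothetical evaluator sketch: checking a candidate plan against
# constraints is a cheap, mechanical operation, even though generating
# a good plan requires search. Returned feedback can be fed back to the
# LLM to guide refinement.
def evaluate_plan(plan, budget, required_cities):
    """Return (fitness, feedback) for a candidate plan.

    plan: list of (city, cost) stops -- an invented representation.
    """
    feedback = []
    total = sum(cost for _, cost in plan)
    if total > budget:
        feedback.append(f"over budget by {total - budget}")
    visited = {city for city, _ in plan}
    for city in required_cities:
        if city not in visited:
            feedback.append(f"missing required stop: {city}")
    # Fitness: fraction of satisfied constraints
    # (one budget check plus one check per required city).
    n_constraints = 1 + len(required_cities)
    fitness = 1 - len(feedback) / n_constraints
    return fitness, feedback
```

Because the evaluator returns textual feedback alongside a scalar fitness, it can serve double duty: selection pressure for evolution and revision hints for the LLM.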

Three mechanisms drive effectiveness:

  1. Population diversity via island model: Distinct sub-populations evolve independently between migration and reset events. Migration moves high-fitness solutions across islands; island reset replaces low-fitness populations with strong solutions from the global pool. This sustains exploration diversity that single-population evolution loses.
  2. LLM-based genetic operators: Instead of traditional mutation and crossover on symbolic representations, the LLM itself recombines and refines candidates using natural language understanding. This enables meaningful variation in unstructured solution spaces.
  3. Fitness-proportional selection: Parents with greater fitness are more likely to be selected for recombination, creating progressive quality improvement.
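The three mechanisms above can be sketched as a single loop. This is a toy illustration, not the paper's implementation: `recombine` stands in for the LLM-based genetic operator, the migration topology (a simple ring) and all parameter names are assumptions, and island resets are omitted for brevity.

```python
import random

def evolve(islands, evaluate, recombine, generations, migrate_every=3):
    """Minimal island-model evolutionary loop (illustrative sketch).

    islands: list of sub-populations, each a list of candidate solutions.
    evaluate: candidate -> fitness in [0, 1].
    recombine: (parent_a, parent_b) -> child; placeholder for the
               LLM operator that recombines/refines in natural language.
    """
    for gen in range(1, generations + 1):
        for island in islands:
            fits = [evaluate(c) for c in island]
            children = []
            for _ in range(len(island)):
                # Fitness-proportional selection: higher-fitness parents
                # are chosen more often (epsilon avoids all-zero weights).
                a, b = random.choices(
                    island, weights=[f + 1e-9 for f in fits], k=2)
                children.append(recombine(a, b))
            island[:] = children
        if gen % migrate_every == 0:
            # Migration: each island's best replaces the previous
            # island's worst, sustaining cross-island diversity.
            bests = [max(isl, key=evaluate) for isl in islands]
            for i, isl in enumerate(islands):
                isl[isl.index(min(isl, key=evaluate))] = bests[i - 1]
    return max((c for isl in islands for c in isl), key=evaluate)
```

Keeping islands independent between migration events is what preserves exploration: each sub-population can pursue a different region of the solution space before strong candidates spread.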

On TravelPlanner and Natural Plan benchmarks, Mind Evolution solves more than 98% of problem instances using Gemini 1.5 Pro — significantly outperforming Best-of-N and Sequential Revision when controlling for inference cost.

This extends the test-time compute landscape beyond the standard parallel-vs-sequential tradeoff. Mind Evolution is neither pure parallel sampling (Best-of-N) nor pure sequential refinement — it is iterative population evolution that combines elements of both. The island model specifically addresses the diversity collapse problem identified in "Do iterative refinement methods suffer from overthinking?" — by maintaining multiple independent populations, evolution sustains exploration where single-trajectory refinement converges prematurely.


Original note title: evolutionary search at inference time outperforms best-of-n and sequential revision on natural language planning