📂 Github | ✍️ Blog | 📑 PDF

Introduction


Combining large language models with evolutionary algorithms has become an increasingly active research area. Frameworks like AlphaEvolve, OpenEvolve, and ShinkaEvolve have led this charge, and all share the same core loop: an LLM generates candidate solutions, which are iteratively improved through evolutionary search.

But these approaches share a common blind spot. The search strategy is fixed. Which candidates to select as parents, which variation operator to apply, how to construct the inspiration context — all of these decisions are hardcoded by human designers before the search begins.

Existing LLM-based evolutionary frameworks (LLMasES, AlphaEvolve, ShinkaEvolve) all rely on manually configured, fixed search strategies

EvoX attacks this problem directly: "Don't just evolve the solutions — evolve the strategy for finding them."

Why Fixed Search Strategies Fall Short


The optimization process is fundamentally non-stationary. As search progresses, the quality distribution of the candidate population, its diversity, and the effectiveness of variation operators all shift continuously. A strategy that works brilliantly in the early exploration phase can become a liability later when you need precise refinement.

A fixed MAP-Elites strategy (red) plateaus after ~50 iterations. Switching to a strategy that explicitly samples along multi-objective trade-offs (blue) keeps performance climbing

The pattern holds across problem types too. Some problems respond well to local refinement — small, incremental edits. Others, like circle packing, require completely restructuring the solution before any real progress is possible. A single hardcoded strategy simply cannot generalize across this diversity.

The core problem: existing methods rely on manually configured, fixed search strategies, and a fixed strategy often fails to generalize across different problems or across different stages of the same optimization run.

EvoX's Core Idea: Two Concurrent Evolution Loops


Traditional methods (left) vs. EvoX's Two-level Evolution (right) — EvoX treats the search strategy itself as an evolvable object

EvoX reframes LLM-driven optimization as a two-level evolutionary process running simultaneously.

❌ Traditional Methods

✅ EvoX

1. Solution Evolution

The standard loop: the current search strategy selects a parent, applies a variation operator, generates a new candidate via LLM, evaluates it, and updates the population database. This is what every existing framework does.
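This loop can be sketched in a few lines. The sketch below is purely illustrative: the `strategy` interface, the numeric toy candidates, and the stand-in for the LLM are assumptions for demonstration, not EvoX's actual API.

```python
import random

def evolve_solutions(population, evaluate, llm_generate, strategy, iterations=50):
    """Level-1 (solution) loop: the strategy drives every decision at each step."""
    scored = [(cand, evaluate(cand)) for cand in population]
    for _ in range(iterations):
        parent = strategy["select_parent"](scored)   # which candidate to build on
        operator = strategy["pick_operator"]()       # e.g. "refine" vs. "restructure"
        child = llm_generate(parent, operator)       # LLM proposes a new candidate
        scored.append((child, evaluate(child)))      # evaluate, update the database
    return max(scored, key=lambda c: c[1])

# Toy stand-ins: candidates are numbers, the "LLM" nudges them toward a target.
random.seed(0)
evaluate = lambda x: -abs(x - 10)                      # fitness peaks at 10
llm_generate = lambda parent, op: parent + random.uniform(-1, 3)
strategy = {
    "select_parent": lambda scored: max(scored, key=lambda c: c[1])[0],  # greedy
    "pick_operator": lambda: "refine",
}
best, score = evolve_solutions([0.0], evaluate, llm_generate, strategy)
```

Note that the `strategy` dictionary is the only place search behavior lives; swapping it out changes how the same loop explores, which is exactly the knob the next level of evolution turns.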