AutoEoH is an intelligent, multi-agent system for automated heuristic algorithm design. Built on top of the EoH (Evolution of Heuristics) engine, it orchestrates the entire workflow—from problem understanding to heuristic evolution and evaluation—through a coordinated set of specialized agents.
Instead of manually designing heuristics, AutoEoH lets you describe your problem in natural language or provide an existing codebase, and it will automatically:
- formulate the optimization task,
- extract or define heuristic components,
- validate and execute the evolutionary search, and
- deliver high-quality heuristics with summarized results.
Designed for seamless integration with Claude Code and Codex, AutoEoH introduces a natural-language-driven development paradigm for algorithm design. A unified orchestration contract (AGENTS.md) ensures consistent behavior across platforms, with a lightweight adapter (CLAUDE.md) for Claude-specific execution.
AutoEoH works out of the box with widely used optimization libraries such as Pymoo, PlatEMO, DEAP, Nevergrad, PyGMO, and Optuna, enabling you to rapidly build customized heuristic design pipelines on top of familiar ecosystems. The generated systems are automatically tested in a lightweight manner, and can be easily scaled to full experiments by adjusting parameters in runEoH.py.
The entire pipeline usually takes 10–30 minutes.
```
Problem description / existing solver code
    -> TaskCreator (from_description)
       or CodebaseAnalyzer -> TaskFormulator (from_code)
    -> TaskChecker
    -> EvaluationChecker
    -> EoHRunner
    -> ResultsChecker
    -> Best heuristics + run summary
```
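The flow above can be sketched as a simple sequential agent chain. This is a minimal illustration only: the step names mirror the agents, but the function signatures below are hypothetical placeholders, not AutoEoH's actual API (the real orchestration lives in shared/pipeline_orchestrator.py).

```python
# Hypothetical sketch of the AutoEoH pipeline as a sequential agent chain.
# The callables below are trivial stand-ins, not the actual agent APIs.

def run_pipeline(source, mode="from_description"):
    if mode == "from_description":
        task = task_creator(source)            # build a task package from text
    else:
        component = codebase_analyzer(source)  # locate the heuristic component
        task = task_formulator(component)      # turn it into a task package
    task = task_checker(task)                  # validate / auto-repair the package
    evaluation_checker(task)                   # catch null-score issues early
    results = eoh_runner(task)                 # run the evolutionary search
    return results_checker(results)            # parse logs, write summary

# Trivial stand-ins so the sketch runs end to end.
task_creator = task_formulator = task_checker = lambda x: x
codebase_analyzer = evaluation_checker = lambda x: x
eoh_runner = lambda task: {"task": task, "best_score": None}
results_checker = lambda results: results
```

The two entry points (from_description and from_code) converge on the same validation and search steps, which is why the checkers sit between formulation and the EoH run.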
| Agent | Role |
|---|---|
| TaskCreator | Generate a complete EoH task package from a plain-text problem description |
| CodebaseAnalyzer | Identify the key heuristic component in an existing codebase |
| TaskFormulator | Extract the heuristic decision function from an existing code project and generate a task package |
| TaskChecker | Validate and auto-repair the generated task package before running EoH |
| EvaluationChecker | Diagnose and validate the evaluation function; catch null-score issues before the full run |
| EoHRunner | Configure and execute the EoH search |
| ResultsChecker | Parse EoH logs and write a human-readable report |
```
git clone <repo-url>
cd AutoEoH
pip install -r requirements.txt
pip install -e ./EoH
```

Install the Claude Code CLI, then open the repository:

```
npm install -g @anthropic-ai/claude-code
cd AutoEoH
claude
```

Claude Code reads CLAUDE.md automatically and follows the workflow defined in AGENTS.md. No additional setup is required beyond LLM credentials in config.yaml.
Install the OpenAI Codex CLI, then open the repository:

```
npm install -g @openai/codex
cd AutoEoH
codex
```

Codex reads AGENTS.md directly. The same workflow applies.

Edit config.yaml before running. Credentials should come from environment variables rather than being committed to the file.
```yaml
llm:
  provider: "openai_compatible"   # or "anthropic"
  model: "deepseek-v3"            # model name for your provider
  base_url: "api.example.com"     # required for openai_compatible providers
  api_key: ""                     # leave blank; set via env variable

eoh:
  max_generations: 4
  pop_size: 4
  safe_evaluate: true
  evaluation_timeout_seconds: 90
```

Set credentials via environment variable (takes precedence over config.yaml):

```
export AUTOEOH_LLM_API_KEY="sk-..."      # preferred generic key
export OPENAI_API_KEY="sk-..."           # fallback for OpenAI-compatible
export ANTHROPIC_API_KEY="sk-ant-..."    # fallback for Anthropic
```

Other overridable variables: AUTOEOH_LLM_PROVIDER, AUTOEOH_LLM_MODEL, AUTOEOH_LLM_BASE_URL.
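The precedence rule (environment variable over config.yaml, with provider-specific keys as fallbacks) can be illustrated with a small loader sketch. This is an assumption about the behavior described above, not the actual code in shared/config_loader.py:

```python
import os

# Hypothetical sketch of env-over-file precedence for LLM credentials.
# AUTOEOH_LLM_API_KEY wins; provider-specific keys are fallbacks; the
# config.yaml value is used only when no environment variable is set.
def resolve_api_key(file_value, provider, env=os.environ):
    fallback = {
        "openai_compatible": "OPENAI_API_KEY",
        "anthropic": "ANTHROPIC_API_KEY",
    }.get(provider)
    return (
        env.get("AUTOEOH_LLM_API_KEY")
        or (env.get(fallback) if fallback else None)
        or file_value
    )
```

For example, with `AUTOEOH_LLM_API_KEY` set, the file's `api_key` value is ignored even if non-empty.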
This example evolves the MOEA/D leader-selection heuristic in examples/01_pymoo
using the bundled pymoo codebase.
Step 1 — configure your LLM
```
export AUTOEOH_LLM_API_KEY="sk-..."
```

Edit config.yaml to set provider, model, and base_url for your LLM.
Step 2 — run
Choose one of the three approaches below.
Formulate a task from the code in examples/01_pymoo and run EoH.
Claude Code reads description.md from the example folder, runs the full
pipeline, and reports the best heuristic score.
Formulate a task from examples/01_pymoo/pymoo-main and evolve the heuristic.
```
python run_pipeline.py --mode from_code \
    --code-dir examples/01_pymoo/pymoo-main
```

Outputs are written to examples/01_pymoo/output/task/ (task package) and examples/01_pymoo/output/run/ (EoH results and summary).
Step 3 — inspect results
```
cat examples/01_pymoo/output/run/results_summary.md
```

The summary lists the top-k heuristics, scores, and total LLM cost. The running log and full results are usually written to examples/01_pymoo/output/run.
```
# from_task: run EoH on an existing task package
python run_pipeline.py examples/01_pymoo/output/task

# from_code: analyse a codebase, formulate a task, then run EoH
python run_pipeline.py --mode from_code --code-dir examples/01_pymoo/pymoo-main

# skip or run TaskChecker offline
python run_pipeline.py examples/01_pymoo/output/task --skip-task-checker
python run_pipeline.py examples/01_pymoo/output/task --offline-task-checker
```

Open the repository and rely on AGENTS.md.
Examples:
- "Run AutoEoH from the description in examples/07_optuna/description.md."
- "Formulate a task from the solver code in examples/02_deap/ and run EoH."
- "Check the latest results in examples/04_pygmo/output/run."
Open the repository and rely on CLAUDE.md.
That file delegates to the same workflow defined in AGENTS.md.
```
User request in natural language
|
+-- Codex reads AGENTS.md
|   |
|   `-- step-level instructions in agents/*/agent.md
|
`-- Claude Code reads CLAUDE.md
    |
    `-- delegates to AGENTS.md
```
```
AGENTS.md / CLAUDE.md
|
+-- CodebaseAnalyzer spec: agents/codebase_analyzer/agent.md
|   `-- implementation: agents/codebase_analyzer/__init__.py
|
+-- TaskCreator spec: agents/task_creator/agent.md
|   `-- implementation: agents/task_creator/__init__.py
|
+-- TaskFormulator spec: agents/task_formulator/agent.md
|   `-- implementation: agents/task_formulator/__init__.py
|
+-- TaskChecker spec: agents/task_checker/agent.md
|   `-- implementation: agents/task_checker/__init__.py
|
+-- EvaluationChecker spec: agents/evaluation_checker/agent.md
|   `-- implementation: agents/evaluation_checker/__init__.py
|
+-- EoHRunner spec: agents/eoh_runner/agent.md
|   `-- implementation: agents/eoh_runner/__init__.py
|
`-- ResultsChecker spec: agents/results_checker/agent.md
    `-- implementation: agents/results_checker/__init__.py
```

```
Shared runtime
|
+-- config loading: shared/config_loader.py
+-- pipeline orchestration: shared/pipeline_orchestrator.py
+-- task contract: shared/task_package.py
+-- task preflight: shared/task_preflight.py
+-- description parsing: shared/description_parser.py
+-- anthropic backend: shared/anthropic_api.py
+-- compatibility alias: shared/claude_api.py
+-- cost tracking: shared/cost_tracker.py
+-- artifact cache: shared/cache_store.py
`-- search/evolution engine: EoH/
```
In short:
- `.md` files define orchestration rules and step contracts for agent-driven execution.
- `.py` files implement the actual logic those contracts refer to.
All steps exchange a TaskPackage; see shared/task_package.py.
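As a rough mental model of that contract, a task package bundles everything one run needs. The field names below are inferred from the files written to disk, not copied from shared/task_package.py:

```python
from dataclasses import dataclass

# Rough mental model of the TaskPackage contract; field names are
# inferred from the on-disk task_dir/ layout, not the real class.
@dataclass
class TaskPackage:
    template_code: str    # heuristic function skeleton (template.py)
    evaluation_code: str  # scoring logic (evaluation.py)
    instance_code: str    # problem-instance loader (get_instance.py)
    meta: dict            # task metadata (task_meta.json)
```

Each agent consumes and returns this one object, which is what lets the checkers validate a package without knowing which agent produced it.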
When a task package is saved to disk, the folder includes:
```
task_dir/
|- template.py
|- evaluation.py
|- get_instance.py
|- task_meta.json
`- runEoH.py
```
runEoH.py is a user-facing helper script generated next to the task files.
It loads the local task package and lets users adjust EoH settings for heavier
experiments without modifying the main AutoEoH repository code.
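Scaling up typically means bumping a couple of parameters in the generated script. A hypothetical shape for runEoH.py is sketched below; the actual generated script's imports and entry point may differ:

```python
# Hypothetical shape of a generated runEoH.py; the actual script produced
# by AutoEoH may use different imports and an actual EoH entry point.

# Small values keep the automatic smoke test fast; raise them for a full run.
SETTINGS = {
    "max_generations": 4,   # e.g. 20+ for a real experiment
    "pop_size": 4,          # e.g. 10+ for a real experiment
    "safe_evaluate": True,
    "evaluation_timeout_seconds": 90,
}

def scale_for_full_run(settings, generations=20, pop=10):
    """Return a copy of the settings bumped up for a full experiment."""
    scaled = dict(settings)
    scaled.update(max_generations=generations, pop_size=pop)
    return scaled
```

Because the script lives inside the task folder, edits like this stay local to one experiment and never touch the main AutoEoH repository code.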
We welcome contributions! 🛠️
- Add your algorithm codebase by simply putting it inside examples/ and writing a demo description.md!
- Report bugs or request features via GitHub issues.
- Add or improve agents, integrations, or examples.
- Enhance documentation or tutorials.
- Follow AGENTS.md conventions and test changes before submitting a pull request.
