Quant research workflows and trading agents from one YAML project.
QuantTradeAI is a YAML-first, CLI-first framework for traders, researchers, and developers who want a practical path from market data to research runs, backtests, and trading agents. The happy path is intentionally simple: define one project, run it from the CLI, and inspect standardized artifacts for every run.
Getting Started | Project YAML | Quick Reference | Configuration | Roadmap | Contributing
> [!TIP]
> New users should start with `config/project.yaml`. It is the canonical entrypoint for `init`, `validate`, `research run`, and `agent run`.
- Want the fastest working path? Jump to Research In 4 Commands
- Already have a trained model? Jump to Run A Model Agent
- Evaluating prompt-driven agents? Jump to Run An LLM Agent
- Need the full config shape? Jump to What A Project Looks Like
- Comparing current capabilities? Jump to Current Support
- One project file: keep research and agents in the same `config/project.yaml`
- One clear CLI: initialize, validate, run research, and run agents with a small command surface
- Shared primitives: reuse symbols, features, and time windows across workflows
- Run visibility by default: each run writes resolved configs, metrics, and artifacts to disk
- YAML first, Python extendable: common workflows require little or no framework code
| I want to... | Best path today | What I get |
|---|---|---|
| Research a strategy end to end | `init` -> `validate` -> `research run` -> `promote --run research/<run_id>` | Time-aware evaluation, backtests, metrics, run records, and a stable promoted model path |
| Run a deterministic rule agent | `init --template rule-agent` -> `agent run --mode backtest` -> `promote` -> `agent run --mode paper` -> `promote --to live` -> `agent run --mode live` | A YAML-only agent that can move through backtest, paper, and live with explicit promotion gates |
| Run a trained model as an agent | `init --template model-agent` -> `validate` -> `agent run --mode backtest` -> `promote` -> `agent run --mode paper` -> `promote --to live` -> `agent run --mode live` | One YAML-defined model agent wired to a stable `models/promoted/...` path that can be backtested, promoted, paper-run, and live-run |
| Run an LLM agent | `init --template llm-agent` -> `agent run --mode backtest` -> `promote` -> `agent run --mode paper` -> `promote --to live` -> `agent run --mode live` | Prompt-driven agent logic using project config across all three modes |
| Run a hybrid agent | `init --template hybrid` -> `research run` -> `promote --run research/<run_id>` -> `agent run --mode backtest` -> `promote` -> `agent run --mode paper` -> `promote --to live` -> `agent run --mode live` | Model signals plus LLM reasoning in one project, with research outputs promoted into a stable path before the agent is promoted through environments |
| Generate a Docker Compose deployment bundle | `deploy --agent <name> --target docker-compose` | A paper-agent bundle with compose, Dockerfile, env placeholders, and resolved config |
| Keep using the older live loop | `live-trade` with runtime YAML files | Legacy compatibility for existing setups |
```mermaid
flowchart LR
    A["config/project.yaml"] --> B["validate"]
    A --> C["research run"]
    A --> D["agent run"]
    C --> E["models/experiments/..."]
    E --> F["promote --run research/..."]
    F --> G["models/promoted/..."]
    G --> D
    C --> K["runs/research/..."]
    D --> H["runs/agent/backtest/..."]
    D --> I["runs/agent/paper/..."]
    D --> J["runs/agent/live/..."]
```
QuantTradeAI is one framework with two connected tracks:
- Research: data -> features -> labels -> training -> evaluation -> backtest -> run records
- Agents: YAML-defined `rule`, `model`, `llm`, and `hybrid` agents that reuse the same project definitions
| Workflow | Status |
|---|---|
| `research run` from `project.yaml` | Supported |
| `agent run` for rule agents in backtest | Supported |
| `agent run` for rule agents in paper | Supported |
| `agent run` for rule agents in live | Supported |
| `agent run` for model agents in backtest | Supported |
| `agent run` for model agents in paper | Supported |
| `agent run` for model agents in live | Supported |
| `agent run` for llm and hybrid agents in backtest | Supported |
| `agent run` for llm and hybrid agents in paper | Supported |
| `agent run` for llm and hybrid agents in live | Supported |
| Research-run promotion to stable model paths | Supported |
| Agent backtest-to-paper promotion | Supported |
| Agent paper-to-live promotion with acknowledgement | Supported |
| `deploy --target docker-compose` for paper agents | Supported |
| `live-trade` legacy runtime YAML workflow | Supported for compatibility |
> [!NOTE]
> `agent run --mode live` is the canonical live path for project-defined agents. `live-trade` still exists for legacy runtime YAML workflows and does not read `config/project.yaml`.
QuantTradeAI requires Python 3.11+.
```bash
git clone https://github.com/AKKI0511/QuantTradeAI.git
cd QuantTradeAI
poetry install --with dev
poetry run quanttradeai --help
```

If you prefer a package install, `pip install .` also works.
Use this if you want the simplest end-to-end quant workflow.
```bash
poetry run quanttradeai init --template research -o config/project.yaml
poetry run quanttradeai validate -c config/project.yaml
poetry run quanttradeai research run -c config/project.yaml
poetry run quanttradeai runs list
poetry run quanttradeai runs list --scoreboard --sort-by net_sharpe
```

This path gives you:
- a canonical project config
- resolved-config validation output
- a research run with metrics and artifacts
- standardized outputs under `runs/research/...`
- a quick scoreboard view for ranking local runs by the metrics that matter
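If you want the same ranking outside the CLI, the scoreboard behavior can be approximated with a short script. This is an illustrative sketch, not a QuantTradeAI API: the `runs/research/<run_id>/metrics.json` layout and the flat `net_sharpe` key are assumptions inferred from the commands above.

```python
import json
from pathlib import Path


def rank_research_runs(runs_root: str, metric: str = "net_sharpe"):
    """Rank local research runs by a metric read from each run's metrics.json.

    Assumes a runs_root/<run_id>/metrics.json layout with a flat metric key;
    adjust the glob and key if your artifact layout differs.
    """
    scored = []
    for metrics_file in Path(runs_root).glob("*/metrics.json"):
        metrics = json.loads(metrics_file.read_text())
        if metric in metrics:
            scored.append((metrics_file.parent.name, metrics[metric]))
    # Highest metric first, mirroring `runs list --scoreboard --sort-by net_sharpe`
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```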
To make a winning research artifact available to model or hybrid agents through a stable path:
```bash
poetry run quanttradeai promote --run research/<run_id> -c config/project.yaml
```

Use this if you want the smallest deterministic agent workflow with no LLM dependency.
```bash
poetry run quanttradeai init --template rule-agent -o config/project.yaml
poetry run quanttradeai validate -c config/project.yaml
poetry run quanttradeai agent run --agent rsi_reversion -c config/project.yaml --mode backtest
poetry run quanttradeai promote --run agent/backtest/<run_id> -c config/project.yaml
poetry run quanttradeai agent run --agent rsi_reversion -c config/project.yaml --mode paper
poetry run quanttradeai promote --run agent/paper/<run_id> -c config/project.yaml --to live --acknowledge-live rsi_reversion
poetry run quanttradeai agent run --agent rsi_reversion -c config/project.yaml --mode live
```

The default template wires a simple RSI threshold rule through YAML only:
```yaml
agents:
  - name: "rsi_reversion"
    kind: "rule"
    mode: "paper"
    rule:
      preset: "rsi_threshold"
      feature: "rsi_14"
      buy_below: 30.0
      sell_above: 70.0
```

Use this if you already have a trained model artifact and want one YAML-defined agent that can run in backtest, paper, and live mode.
```bash
poetry run quanttradeai init --template model-agent -o config/project.yaml
poetry run quanttradeai validate -c config/project.yaml
# Replace models/promoted/aapl_daily_classifier/ with a real trained model artifact
poetry run quanttradeai agent run --agent paper_momentum -c config/project.yaml --mode backtest
poetry run quanttradeai promote --run agent/backtest/<run_id> -c config/project.yaml
poetry run quanttradeai agent run --agent paper_momentum -c config/project.yaml --mode paper
poetry run quanttradeai promote --run agent/paper/<run_id> -c config/project.yaml --to live --acknowledge-live paper_momentum
poetry run quanttradeai agent run --agent paper_momentum -c config/project.yaml --mode live
```

> [!IMPORTANT]
> The model-agent template creates a placeholder directory at `models/promoted/aapl_daily_classifier/`. Replace it with a promoted research model artifact or another compatible saved model before running the agent.
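Since the template ships only a placeholder, a quick pre-flight check can confirm the directory actually contains files before you run the agent. This is a generic sketch, not a QuantTradeAI API; what counts as a valid artifact is up to the framework, so this only guards against running on an untouched placeholder.

```python
from pathlib import Path


def artifact_present(model_dir: str) -> bool:
    """Return True if the model directory exists and contains at least one file.

    A generic pre-flight check before `agent run`; it does not verify that the
    files form a model the framework can load.
    """
    path = Path(model_dir)
    return path.is_dir() and any(p.is_file() for p in path.rglob("*"))
```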
Use this if you want prompt-driven agent logic from YAML and want to move the same agent definition from backtest into paper and live mode.
```bash
poetry run quanttradeai init --template llm-agent -o config/project.yaml
poetry run quanttradeai validate -c config/project.yaml
poetry run quanttradeai agent run --agent breakout_gpt -c config/project.yaml --mode backtest
poetry run quanttradeai promote --run agent/backtest/<run_id> -c config/project.yaml
poetry run quanttradeai agent run --agent breakout_gpt -c config/project.yaml --mode paper
poetry run quanttradeai promote --run agent/paper/<run_id> -c config/project.yaml --to live --acknowledge-live breakout_gpt
poetry run quanttradeai agent run --agent breakout_gpt -c config/project.yaml --mode live
```

Use this if you want to combine trained model signals and LLM reasoning in one project and then promote the same agent through paper and live mode.
```bash
poetry run quanttradeai init --template hybrid -o config/project.yaml
poetry run quanttradeai validate -c config/project.yaml
poetry run quanttradeai research run -c config/project.yaml
poetry run quanttradeai promote --run research/<run_id> -c config/project.yaml
poetry run quanttradeai agent run --agent hybrid_swing_agent -c config/project.yaml --mode backtest
poetry run quanttradeai promote --run agent/backtest/<run_id> -c config/project.yaml
poetry run quanttradeai agent run --agent hybrid_swing_agent -c config/project.yaml --mode paper
poetry run quanttradeai promote --run agent/paper/<run_id> -c config/project.yaml --to live --acknowledge-live hybrid_swing_agent
poetry run quanttradeai agent run --agent hybrid_swing_agent -c config/project.yaml --mode live
```

The default hybrid template is prewired to `models/promoted/aapl_daily_classifier`, so you do not need to hand-edit timestamped experiment paths after the research run.
Use this if you want a generated Docker Compose bundle for a project-defined paper agent.
```bash
poetry run quanttradeai deploy --agent breakout_gpt -c config/project.yaml --target docker-compose
```

This writes a deployment bundle under `reports/deployments/<agent>/<timestamp>/` with:

- `docker-compose.yml`
- `Dockerfile`
- `.env.example`
- `README.md`
- `resolved_project_config.yaml`
- `deployment_manifest.json`
The happy path is centered on `config/project.yaml`.
```yaml
project:
  name: "intraday_lab"
  profile: "paper"

data:
  symbols: ["AAPL"]
  start_date: "2022-01-01"
  end_date: "2024-12-31"
  timeframe: "1d"
  test_start: "2024-09-01"
  test_end: "2024-12-31"

features:
  definitions:
    - name: "rsi_14"
      type: "technical"
      params: { period: 14 }

agents:
  - name: "paper_momentum"
    kind: "model"
    mode: "paper"
    model:
      path: "models/promoted/aapl_daily_classifier"
```

For the full shape, field reference, and supported agent modes, see Project YAML.
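Before handing the file to `validate`, a quick sanity check in plain Python can catch missing agent fields early. This is an illustrative pre-check, not the framework's own validation; the config below is the parsed mapping a YAML loader (e.g. PyYAML's `yaml.safe_load`) would produce from the example above, and the required keys are inferred from that example.

```python
# The example project config, as a parsed mapping rather than YAML text.
config = {
    "project": {"name": "intraday_lab", "profile": "paper"},
    "data": {"symbols": ["AAPL"], "timeframe": "1d"},
    "agents": [
        {
            "name": "paper_momentum",
            "kind": "model",
            "mode": "paper",
            "model": {"path": "models/promoted/aapl_daily_classifier"},
        },
    ],
}


def check_agents(config: dict) -> list[str]:
    """Return names of agents that declare the minimal keys shown in the example.

    The required-key set (name, kind, mode) mirrors the sample project file;
    the real schema may require more.
    """
    required = {"name", "kind", "mode"}
    return [a["name"] for a in config.get("agents", []) if required <= a.keys()]
```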
| Workflow | Output directory | Typical artifacts |
|---|---|---|
| Research | `runs/research/<timestamp>_<project>/` | `resolved_project_config.yaml`, runtime YAML snapshots, `summary.json`, `metrics.json`, backtest artifacts |
| Agent backtest | `runs/agent/backtest/<timestamp>_<agent>/` | `resolved_project_config.yaml`, `summary.json`, `metrics.json`, `decisions.jsonl`, backtest files |
| Agent paper | `runs/agent/paper/<timestamp>_<agent>/` | `resolved_project_config.yaml`, `summary.json`, `metrics.json`, `decisions.jsonl`, `executions.jsonl`, runtime YAML snapshots |
| Agent live | `runs/agent/live/<timestamp>_<agent>/` | `resolved_project_config.yaml`, `summary.json`, `metrics.json`, `decisions.jsonl`, `executions.jsonl`, runtime streaming/risk/position-manager YAML snapshots |
This makes it easier to compare runs, audit what actually executed, and reuse winning configurations.
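Because `decisions.jsonl` holds one record per line, a post-run audit can be a few lines of Python. This sketch assumes each line is a standalone JSON object with an `action` field; that field name is an assumption about the record schema, not documented here, so adjust it to match your actual records.

```python
import json
from collections import Counter
from pathlib import Path


def summarize_decisions(run_dir: str) -> Counter:
    """Count decision actions in a run's decisions.jsonl (one JSON object per line).

    The `action` key is an assumed field name; records without it are
    counted under "unknown".
    """
    counts: Counter = Counter()
    path = Path(run_dir) / "decisions.jsonl"
    with path.open() as fh:
        for line in fh:
            line = line.strip()
            if line:
                counts[json.loads(line).get("action", "unknown")] += 1
    return counts
```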
To compare local runs directly from the CLI, use the metrics-aware scoreboard:
```bash
poetry run quanttradeai runs list --scoreboard
poetry run quanttradeai runs list --scoreboard --sort-by net_sharpe
poetry run quanttradeai runs list --type agent --mode live --scoreboard --sort-by total_pnl
```

`config/project.yaml` is the recommended path for new work. Legacy workflows remain available for compatibility, especially for saved-model backtests and the older live trading loop.
```bash
poetry run quanttradeai fetch-data -c config/model_config.yaml
poetry run quanttradeai train -c config/model_config.yaml
poetry run quanttradeai evaluate -m <model_dir> -c config/model_config.yaml
poetry run quanttradeai backtest-model -m <model_dir> -c config/model_config.yaml -b config/backtest_config.yaml
poetry run quanttradeai live-trade -m <model_dir> -c config/model_config.yaml -s config/streaming.yaml
poetry run quanttradeai validate-config
```

Important boundary:
- `agent run --mode paper` for project-defined `model` agents compiles runtime config from `config/project.yaml`
- `agent run --mode live` for project-defined `rule`, `model`, `llm`, and `hybrid` agents compiles runtime config from `config/project.yaml`
- `live-trade` still uses the legacy runtime YAML files directly
```bash
poetry install --with dev
make format
make lint
make test
```

See CONTRIBUTING.md.
MIT. See LICENSE.