Built with Kiro - JuntoAI A2A was primarily developed using Kiro, the AI-powered IDE. Kiro's specs-driven workflow, steering files, and automation hooks shaped every feature in this repo. Learn more about Kiro →
JuntoAI A2A is a config-driven scenario engine and universal protocol-level execution layer for professional negotiations. It is not a chatbot. Drop a JSON scenario file into the repo, and autonomous AI agents - powered by Gemini, Claude, or any LiteLLM-supported provider - negotiate in real time while a Glass Box UI streams their inner reasoning and public messages to your browser. Toggle a single hidden variable (a secret competing offer, a compliance constraint) and watch the deal outcome shift. A2A proves that AI negotiation is controllable, observable, and reproducible.
Table of Contents

- Architecture
- Quick Start
- Local Battle Arena
- Environment Configuration
- Connect Your Own Agents
- Leaderboard
- Developing with Kiro
- Contributing
- License
- Changelog
JuntoAI A2A is a monorepo with three top-level directories:
| Directory | Stack | Purpose |
|---|---|---|
| `backend/` | Python 3.11 · FastAPI · LangGraph | Scenario orchestration, AI agent execution, SSE streaming |
| `frontend/` | Next.js 14 · Tailwind CSS | Glass Box UI, Arena Selector, Outcome Receipt |
| `infra/` | Terraform · Terragrunt | GCP infrastructure (Cloud Run, Firestore, Vertex AI) |
Data flow: A scenario JSON config defines the agents, toggles, and negotiation parameters. The FastAPI orchestrator loads the config, initializes a LangGraph state machine, and runs the agents turn-by-turn. Each agent's inner thoughts and public messages stream as Server-Sent Events (SSE) to the Next.js Glass Box UI in real time.
Scenario JSON → FastAPI Orchestrator → LangGraph State Machine → AI Agents (Gemini/Claude) → SSE Stream → Next.js Glass Box UI
```mermaid
graph LR
    subgraph Monorepo
        direction TB
        BE["backend/<br/>Python · FastAPI · LangGraph"]
        FE["frontend/<br/>Next.js · Tailwind CSS"]
        INFRA["infra/<br/>Terraform · Terragrunt"]
    end
    CONFIG["Scenario JSON Config<br/>(agents, toggles, params)"]
    ORCH["FastAPI Orchestrator"]
    LANG["LangGraph State Machine"]
    AGENTS["AI Agents<br/>(Gemini / Claude / LiteLLM)"]
    SSE["SSE Stream"]
    UI["Glass Box UI<br/>(Next.js)"]
    CONFIG --> ORCH
    ORCH --> LANG
    LANG --> AGENTS
    AGENTS --> LANG
    LANG --> SSE
    SSE --> UI
    BE -.-> ORCH
    FE -.-> UI
    INFRA -.->|"GCP Cloud Run<br/>Firestore · Vertex AI"| BE
    INFRA -.->|"GCP Cloud Run"| FE
```
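On the wire, each Glass Box update is a standard Server-Sent Event: one or more `data:` lines terminated by a blank line. A minimal Python sketch of an SSE parser for such a stream (the payload fields `agent`, `channel`, and `text` are invented for illustration and are not the actual event schema):

```python
import json

def parse_sse(stream_lines):
    """Parse raw Server-Sent Event lines into JSON event dicts.

    Accumulates `data:` lines until a blank line terminates the event,
    then decodes the joined payload as JSON.
    """
    buffer = []
    for line in stream_lines:
        if line.startswith("data:"):
            buffer.append(line[len("data:"):].strip())
        elif line == "" and buffer:
            yield json.loads("\n".join(buffer))
            buffer = []

# Hypothetical event payloads, mirroring the inner-thought / public-message split:
raw = [
    'data: {"agent": "Developer", "channel": "thought", "text": "Anchor high."}',
    "",
    'data: {"agent": "Developer", "channel": "public", "text": "My rate is $130/hr."}',
    "",
]
events = list(parse_sse(raw))
print([e["channel"] for e in events])  # ['thought', 'public']
```

Any SSE-capable client (the browser's `EventSource`, `curl -N`, or an HTTP library) can consume the same stream.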
Get the full stack running locally in one command. No API keys, no .env file, no GCP credentials.
Prerequisites:
- Docker and Docker Compose (v2.24+)
Steps:
- Clone the repo:

  ```bash
  git clone https://github.com/Juntoai/a2a.git
  cd a2a
  ```

- Start the stack:

  ```bash
  docker compose up
  ```

That's it. Docker Compose spins up Ollama (auto-pulls llama3.1), the FastAPI backend, and the Next.js frontend. First run takes a few minutes for the model download (~4GB); subsequent runs are instant.
Prefer OpenAI or Anthropic? Copy the env template and add your key:
```bash
cp .env.example .env
# Edit .env: set LLM_PROVIDER=openai and OPENAI_API_KEY=sk-...
docker compose up
```
When RUN_MODE=local (the default), the entire cloud stack is swapped for lightweight local alternatives - zero GCP dependencies:
| Component | Cloud Mode | Local Mode |
|---|---|---|
| Database | Firestore | SQLite |
| LLM Router | Vertex AI | LiteLLM |
| Auth | Waitlist + tokens | Bypassed |
| Hosting | GCP Cloud Run | Docker Compose |
The same scenario JSON files work in both modes without modification - only the underlying LLM provider and database change.
Full Local Development Guide → covers the Docker Compose services, model mapping, the default mappings table, environment variables, LiteLLM provider routing, and auth bypass details.
Every configurable value lives in a single `.env` file at the monorepo root. No `.env` file is needed for the default Ollama setup - copy `.env.example` to `.env` only when you want to override defaults.
Full Environment Variable Reference (click to expand)
| Variable | Required | Default | Description |
|---|---|---|---|
| `RUN_MODE` | Optional | `local` | `local` for SQLite + LiteLLM, `cloud` for Firestore + Vertex AI |
| `ENVIRONMENT` | Optional | `development` | Runtime environment label (`development`, `staging`, `production`) |

| Variable | Required | Default | Description |
|---|---|---|---|
| `LLM_PROVIDER` | Optional | `ollama` | LLM backend: `openai`, `anthropic`, `ollama`, or `vertexai` |
| `OPENAI_API_KEY` | Conditional | - | Required when `LLM_PROVIDER=openai` |
| `ANTHROPIC_API_KEY` | Conditional | - | Required when `LLM_PROVIDER=anthropic` |
| `LLM_MODEL_OVERRIDE` | Optional | - | Force every agent to use this single model (ignores scenario `model_id`) |
| `MODEL_MAP` | Optional | - | JSON object mapping scenario `model_id` → provider model name |

| Variable | Required | Default | Description |
|---|---|---|---|
| `GOOGLE_CLOUD_PROJECT` | Cloud only | - | GCP project ID for Firestore and Vertex AI |
| `FIRESTORE_EMULATOR_HOST` | Optional | - | Firestore emulator address (e.g. `localhost:8080`) for local testing |

| Variable | Required | Default | Description |
|---|---|---|---|
| `NEXT_PUBLIC_SITE_URL` | Optional | `https://app.juntoai.org` | Canonical site URL for SEO and metadata |
| `NEXT_PUBLIC_FIREBASE_API_KEY` | Optional | - | Firebase API key for client-side Firestore |
| `NEXT_PUBLIC_FIREBASE_PROJECT_ID` | Optional | - | Firebase project ID |
| `NEXT_PUBLIC_FIREBASE_APP_ID` | Optional | - | Firebase app ID |
| `BACKEND_URL` | Optional | `http://localhost:8000` | Backend origin for the server-side API proxy (never exposed to the browser) |

| Variable | Required | Default | Description |
|---|---|---|---|
| `VERTEX_AI_LOCATION` | Cloud only | `us-east5` | Vertex AI region for Claude models (Gemini uses the global endpoint automatically) |
| `VERTEX_AI_REQUEST_TIMEOUT_SECONDS` | Optional | `60` | Timeout in seconds for Vertex AI requests |
| `SCENARIOS_DIR` | Optional | `backend/app/scenarios/data` | Directory containing scenario JSON files |
| `CORS_ALLOWED_ORIGINS` | Optional | `http://localhost:3000` | Comma-separated allowed CORS origins |
| `APP_VERSION` | Optional | `0.1.0` | Application version string |
OpenAI:

```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
```

Anthropic:

```bash
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here
```

Ollama (local, no API key):

```bash
LLM_PROVIDER=ollama
# Ensure Ollama is running: ollama serve
```

Use `LLM_MODEL_OVERRIDE` to make every agent use the same model - great for cost savings during development:

```bash
LLM_MODEL_OVERRIDE=gpt-4o-mini
```

When set, this overrides all `model_id` values in every scenario config. Remove it to let each agent use its configured model.
Add a new negotiation scenario by dropping a single JSON file into `backend/app/scenarios/data/` - zero code changes. The orchestrator discovers it automatically on startup.
Every scenario JSON file must conform to the `ArenaScenario` Pydantic model. Top-level fields:

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique scenario identifier |
| `name` | string | Display name shown in the Arena Selector |
| `description` | string | Short scenario description |
| `agents` | `AgentDefinition[]` | Array of agent configs (minimum 2) |
| `toggles` | `ToggleDefinition[]` | Investor-facing information toggles (minimum 1) |
| `negotiation_params` | `NegotiationParams` | Turn limits, agreement threshold, turn order |
| `outcome_receipt` | `OutcomeReceipt` | Post-negotiation display metadata |
Each entry in the `agents` array defines one AI agent:

| Field | Type | Description |
|---|---|---|
| `role` | string | Unique role identifier (e.g. `"Developer"`, `"CTO"`) |
| `name` | string | Display name for the agent |
| `type` | `"negotiator" \| "regulator" \| "observer"` | Agent type - controls the output schema |
| `persona_prompt` | string | System prompt defining the agent's personality and strategy |
| `goals` | `string[]` | List of objectives the agent tries to achieve |
| `budget` | `{min, max, target}` | Financial constraints (min ≤ max, all ≥ 0) |
| `tone` | string | Communication style (e.g. `"professional and confident"`) |
| `output_fields` | `string[]` | Fields the agent must include in each response |
| `model_id` | string | LLM model identifier (mapped to a provider model via `MODEL_MAP`) |
| `fallback_model_id` | string (optional) | Fallback model if the primary is unavailable |
Complete example: Freelance Rate Negotiation (click to expand)
```json
{
  "id": "freelance_rate",
  "name": "Freelance Rate Negotiation",
  "description": "A freelance developer negotiates their hourly rate with a startup CTO.",
  "difficulty": "beginner",
  "agents": [
    {
      "role": "Developer",
      "name": "Jordan",
      "type": "negotiator",
      "persona_prompt": "You are Jordan, a freelance developer with 5 years of experience. Your target rate is $120/hr.",
      "goals": ["Achieve hourly rate of $120 or above", "Secure a 3-month minimum contract"],
      "budget": {"min": 80, "max": 160, "target": 120},
      "tone": "professional and confident",
      "output_fields": ["offer", "reasoning", "counter_terms"],
      "model_id": "gemini-3-flash-preview"
    },
    {
      "role": "CTO",
      "name": "Morgan",
      "type": "negotiator",
      "persona_prompt": "You are Morgan, CTO of a seed-stage startup. Your budget is $80-100/hr for this role.",
      "goals": ["Keep rate at or below $100/hr", "Negotiate a trial period before long commitment"],
      "budget": {"min": 60, "max": 120, "target": 90},
      "tone": "friendly but budget-conscious",
      "output_fields": ["offer", "reasoning", "counter_terms"],
      "model_id": "gemini-3-flash-preview"
    },
    {
      "role": "Regulator",
      "name": "Compliance Bot",
      "type": "regulator",
      "persona_prompt": "You enforce fair contracting practices. Flag rates below market minimum ($70/hr) or contracts without clear deliverables.",
      "goals": ["Ensure rate is above $70/hr minimum", "Require clear scope of work"],
      "budget": {"min": 0, "max": 0, "target": 0},
      "tone": "neutral and policy-driven",
      "output_fields": ["compliance_status", "warnings", "recommendation"],
      "model_id": "gemini-3-flash-preview",
      "fallback_model_id": "gemini-3-flash-preview"
    }
  ],
  "toggles": [
    {
      "id": "competing_client",
      "label": "Jordan has a competing offer at $140/hr",
      "target_agent_role": "Developer",
      "hidden_context_payload": {
        "competing_offer": true,
        "competing_rate": 140,
        "details": "You have a signed offer from another client at $140/hr. Use this as leverage."
      }
    }
  ],
  "negotiation_params": {
    "max_turns": 8,
    "agreement_threshold": 10,
    "turn_order": ["Developer", "Regulator", "CTO", "Regulator"]
  },
  "outcome_receipt": {
    "equivalent_human_time": "~1 week",
    "process_label": "Freelance Rate Negotiation"
  }
}
```

Save this as `backend/app/scenarios/data/freelance-rate.scenario.json`, restart the backend, and it appears in the Arena Selector.
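The constraints in the tables above (at least two agents, at least one toggle, budgets with min ≤ max) can be spot-checked before opening a PR. This is a plain-Python approximation of what the real `ArenaScenario` Pydantic model enforces, not the model itself:

```python
def validate_scenario(sc: dict) -> list[str]:
    """Return a list of problems; empty means the config passes these basic checks."""
    errors = []
    for field in ("id", "name", "description", "agents", "toggles", "negotiation_params"):
        if field not in sc:
            errors.append(f"missing field: {field}")
    if len(sc.get("agents", [])) < 2:
        errors.append("at least 2 agents required")
    if len(sc.get("toggles", [])) < 1:
        errors.append("at least 1 toggle required")
    for agent in sc.get("agents", []):
        b = agent.get("budget", {})
        if not (0 <= b.get("min", 0) <= b.get("max", 0)):
            errors.append(f"invalid budget for {agent.get('role', '?')}: need 0 <= min <= max")
    return errors

minimal = {
    "id": "demo", "name": "Demo", "description": "Two-party demo",
    "agents": [
        {"role": "A", "budget": {"min": 80, "max": 160, "target": 120}},
        {"role": "B", "budget": {"min": 60, "max": 120, "target": 90}},
    ],
    "toggles": [{"id": "t1"}],
    "negotiation_params": {"max_turns": 8},
}
print(validate_scenario(minimal))  # []
```

The actual Pydantic model validates much more (field types, the `type` enum, `output_fields`), so treat this as a pre-flight check only.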
Scenario configs use generic `model_id` values like `gemini-3-flash-preview`. In local mode, LiteLLM translates these to your provider's models:

- If `LLM_MODEL_OVERRIDE` is set, all agents use that single model (ignores `model_id`).
- If `MODEL_MAP` is set, each `model_id` is looked up in the JSON mapping. Example: `MODEL_MAP='{"gemini-3-flash-preview": "gpt-4o", "gemini-2.5-pro": "gpt-4o"}'`
- If neither is set, LiteLLM attempts to route the `model_id` string directly to the configured provider.
Local mode uses LiteLLM as a universal LLM router. LiteLLM supports 100+ providers (OpenAI, Anthropic, Ollama, Azure, Bedrock, Mistral, and more) through a single interface.
To bring your own API key for any LiteLLM-supported provider, set the appropriate environment variable in `.env`:

```bash
# Example: use Mistral
LLM_PROVIDER=mistral
MISTRAL_API_KEY=your-mistral-key

# Example: use Azure OpenAI
LLM_PROVIDER=azure
AZURE_API_KEY=your-azure-key
AZURE_API_BASE=https://your-resource.openai.azure.com/
```

LiteLLM handles the provider-specific API formatting, auth, and model routing - your scenario configs stay unchanged.
Coming Soon - The agent leaderboard is on the roadmap. The sections below describe how it will work.
Run the same scenario with different agent configurations and compare results across four evaluation dimensions:
| Dimension | What It Measures |
|---|---|
| Deal Outcome | Final negotiated terms vs each agent's target values |
| Negotiation Efficiency | Number of turns to reach agreement (fewer = better) |
| Humanization Quality | Natural language fluency, strategic reasoning depth, persuasion techniques |
| Regulator Compliance | Number of compliance warnings received (fewer = better) |
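Since the leaderboard is still on the roadmap, here is one hedged sketch of how runs might be ranked on two of the dimensions above (fewer turns and fewer compliance warnings both score better); the field names and tie-break order are invented for illustration:

```python
def rank_runs(runs: list[dict]) -> list[str]:
    """Order runs best-first: fewer turns to agreement, then fewer compliance warnings."""
    ordered = sorted(runs, key=lambda r: (r["turns"], r["warnings"]))
    return [r["config"] for r in ordered]

# Hypothetical results from three agent configurations on the same scenario:
runs = [
    {"config": "aggressive-anchor", "turns": 6, "warnings": 1},
    {"config": "collaborative", "turns": 4, "warnings": 0},
    {"config": "hardball", "turns": 4, "warnings": 2},
]
print(rank_runs(runs))  # ['collaborative', 'hardball', 'aggressive-anchor']
```

A real scoring function would also fold in deal outcome vs. target values and humanization quality, which need scenario-specific normalization.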
Want to compete? Submit your agent configurations - scenario JSON files with custom `persona_prompt` values and `model_id` choices - by opening a PR to `backend/app/scenarios/data/`. See Connect Your Own Agents for the schema.
Kiro is the AI-powered IDE used to build JuntoAI A2A. Open the monorepo root in Kiro and it automatically reads the project context files - no manual configuration needed.
| Directory | Purpose |
|---|---|
| `steering/` | Project context files that Kiro reads automatically |
| `specs/` | Feature specifications planned and built through Kiro's requirements-first workflow |
| `hooks/` | Automation hooks triggered during development workflows |
Kiro reads these files automatically so AI assistance is tuned to the project's conventions:
| File | Covers |
|---|---|
| `tech.md` | Technology stack, runtime versions, dependencies |
| `styling.md` | Tailwind config, brand colors, responsive breakpoints |
| `testing.md` | pytest + Vitest setup, coverage targets, test structure |
| `deployment.md` | Terragrunt workflow, Cloud Build pipeline, branch strategy |
| `product.md` | Product context, user flows, MVP scope, definition of done |
Each numbered directory in `.kiro/specs/` represents a feature designed and built through Kiro's requirements-first process:

- `080_a2a-local-battle-arena` - Docker Compose local mode with SQLite + LiteLLM
- `100_structured-agent-memory` - Structured memory for agent state
- `110_hybrid-agent-memory` - Hybrid memory combining structured and unstructured approaches
- `120_world-class-readme-contributor-hub` - This README and contributor experience
- `130_ai-scenario-builder` - AI-assisted scenario creation
The `.kiro/hooks/` directory contains automation hooks. There is currently one: `spec-release-notes.kiro.hook`, which generates release notes when a spec is completed.
- New features → Create a spec in `.kiro/specs/` using Kiro's requirements-first workflow
- Consistent AI assistance → Steering files ensure every contributor gets the same context
- Automation → Hooks handle repetitive tasks like release notes
Kiro is recommended but not required. Contributors can use any IDE - Kiro just provides the richest AI-assisted experience for this repo thanks to the pre-configured context files.
We love contributions! Read the Contributing Guide to get started - it covers the fork-and-PR workflow, local setup, test commands, and everything you need to open your first PR.
All pull requests are automatically tested by GitHub Actions CI. Both the backend (pytest) and frontend (Vitest) suites must pass with at least 70% coverage before merge.
Ways to contribute:
- Scenario configs - Add new negotiation scenarios (JSON-only, no code changes). See Connect Your Own Agents.
- Bug reports - Found something broken? Open an issue with reproduction steps.
- Feature proposals - Have an idea? Open an issue describing the use case.
- Documentation - Improve the README, add examples, fix typos.
- Agent strategies - Share creative persona prompts and negotiation tactics.
Get involved:
- First contribution? Browse the `good first issue` label for curated starter tasks.
- Join the community - Ask questions and coordinate with other contributors on WhatsApp.
- Code of Conduct - Please read our Code of Conduct. We're committed to a welcoming, inclusive community.

> Scenario contributions are JSON-only - drop a file in `backend/app/scenarios/data/` and open a PR. No code changes required.
This project is licensed under the MIT License.
See CHANGELOG.md for a full history of shipped features and specs.