Persistent Domain Intelligence for the CLI Coding Tools You Already Use
Build AI agents that accumulate domain expertise serving your decisions. AGET provides session continuity, shared memory architecture, and governance patterns across Claude Code, Codex CLI, Gemini CLI—through an open standard. Zero infrastructure required.
Solves: lost context between sessions, knowledge that resets daily, agents that can't learn from each other, and low deployment confidence across your fleet.
AGET enables AI agents that build persistent domain knowledge serving human decisions—with session continuity, shared learning, and governed autonomy across CLI tools. Think of it as the knowledge layer for your AI team.
- Persistent Domain Knowledge - Agents accumulate expertise that compounds across sessions
- Session Continuity - Pick up where you left off with structured memory architecture
- Shared Learning - Propagate insights across your fleet (`.aget/evolution/`)
- Lifecycle Governance - Gated releases, contract testing, deployment verification
- Requirements-Driven - Human-level requirements ground testable specifications
- Universal CLI Compatibility - Works with Claude Code, Codex CLI, Gemini CLI
- Open Standard - AGENTS.md enables ecosystem innovation
- Hook-Ready - Platform-native lifecycle automation via .claude/hooks/
AGET brings formal requirements engineering to agent fleet management. Requirements define principal intent; specifications define testable contracts (two-level model).
EARS Patterns - Write unambiguous specifications:
- Ubiquitous: "The system SHALL always..."
- Event-driven: "WHEN [trigger] the system SHALL..."
- State-driven: "WHILE [condition] the system SHALL..."
- Optional: "WHERE [feature enabled] the system SHALL..."
- Conditional: "IF [condition] THEN the system SHALL..."
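The five patterns above are regular enough to recognize mechanically. As an illustration (this classifier is a sketch, not part of AGET), the keywords come from the EARS notation itself while the function name is hypothetical:

```python
import re

# Keyword-based classifier for the five EARS requirement patterns.
# Order matters: the more specific conditional/event/state/optional
# forms are tried before falling through to the ubiquitous "SHALL" form.
EARS_PATTERNS = [
    ("Conditional",  r"^IF\b.*\bTHEN\b.*\bSHALL\b"),
    ("Event-driven", r"^WHEN\b.*\bSHALL\b"),
    ("State-driven", r"^WHILE\b.*\bSHALL\b"),
    ("Optional",     r"^WHERE\b.*\bSHALL\b"),
    ("Ubiquitous",   r"\bSHALL\b"),
]

def classify_ears(statement: str) -> str:
    """Return the EARS pattern name for a requirement, or 'Non-EARS'."""
    text = statement.strip()
    for name, pattern in EARS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return name
    return "Non-EARS"

print(classify_ears("WHEN the agent starts the system SHALL load version.json"))
# Event-driven
```

A check like this is useful as a lint step: any requirement that classifies as "Non-EARS" is a candidate for rewording before it enters a spec.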
Contract Testing - Validate deployments before production (7-30 tests per agent)
Validation Framework - Every specification includes formal validation tests
Result: Deploy with confidence—formal specs → validated implementations → verified deployments.
See template-spec-engineer-aget for specification engineering capabilities.
- Individual Practitioners building persistent AI expertise in a specific domain
- Power Users coordinating specialized agents across domains
- AI Tool Builders wanting a governance layer for agent deployments
12 Archetypes — Each with specialized skills and ontology (v3.5.0+, updated v3.11.1):
| Template | Archetype | Key Skills | Primary Use Case |
|---|---|---|---|
| template-supervisor-aget | supervisor | broadcast-fleet, review-agent, escalate-issue, create-aget | Fleet coordination (recommended start) |
| template-worker-aget | worker | execute-task, report-progress | Task execution, foundation |
| template-developer-aget | developer | run-tests, lint-code, review-pr | Code development |
| template-advisor-aget | advisor | assess-risk, recommend-action | Persona-based guidance |
| template-consultant-aget | consultant | assess-client, propose-engagement | Strategic engagements |
| template-analyst-aget | analyst | analyze-data, generate-report | Data analysis |
| template-architect-aget | architect | design-architecture, assess-tradeoffs | System design |
| template-researcher-aget | researcher | search-literature, document-finding | Research workflows |
| template-operator-aget | operator | handle-incident, run-playbook | Operations/DevOps |
| template-executive-aget | executive | make-decision, review-budget | Executive advisory |
| template-reviewer-aget | reviewer | review-artifact, provide-feedback | Quality review |
| template-spec-engineer-aget | spec-engineer | validate-spec, generate-requirement | Requirements engineering |
All templates include: 15 universal skills (wake-up, wind-down, check-health, study-up, record-lesson, etc.) + archetype-specific skills above.
Each archetype includes a formal ontology defining domain vocabulary:
Vocabulary → Specification → Implementation
| Archetype | Concepts | Clusters | Example Terms |
|---|---|---|---|
| Developer | 10 | 4 | Codebase, TestSuite, PullRequest, LintRule |
| Supervisor | 8 | 3 | Fleet, Agent, Escalation, Broadcast |
| Advisor | 6 | 2 | Risk, Recommendation, Persona |
| Architect | 7 | 3 | Architecture, Component, Tradeoff |
Benefits: Precision (formal vocabulary prevents ambiguity), Consistency (same concepts across instances), Extensibility (add domain-specific terms).
See ontology/ONTOLOGY_{archetype}.yaml in any template.
Recommended: Begin with a supervisor agent, then use `/aget-create-aget` to build your fleet.
```bash
# Clone the supervisor template
gh repo clone aget-framework/template-supervisor-aget my-supervisor
cd my-supervisor

# Configure identity
vim .aget/version.json       # Set agent_name, domain

# Verify deployment
python3 -m pytest tests/ -v  # Contract tests must pass
```

```bash
# In your CLI tool (Claude Code, Codex CLI, Gemini CLI)
cd my-supervisor/

# Use the supervisor's /aget-create-aget skill to create agents
/aget-create-aget
# Follows 9 SOP gates: ontology → template → identity → deploy
```

Enable your agents to succeed at making their principals successful.
Success cascades from framework quality to agent effectiveness to principal outcomes:
```
Better Principal Outcomes ← Faster decisions, deeper analysis, fewer knowledge gaps
        ↑
Principal Success ← Practitioners deliver better work with accumulated domain expertise
        ↑
Agent Success ← Effective augmentation with persistent knowledge and deployment confidence
        ↑
Framework Quality ← AGET ensures governance, learning, compliance
```
Goal: Not just "manage agents" but enable principal success through accumulated domain intelligence and deployment confidence.
AGET doesn't replace your CLI tools—it coordinates them. Works alongside Claude Code, Codex CLI, Gemini CLI to bring fleet-level capabilities: version control, shared learning, lifecycle governance.
Complementary, not competitive: AGET + CLI Tools work together to enable your agents.
AGENTS.md open standard means anyone can adopt, extend, or integrate. No vendor lock-in, no proprietary formats—just universal CLI compatibility and portable knowledge.
Separates framework knowledge (portable) from domain knowledge (specific):
| Layer | Location | Purpose | Example Content |
|---|---|---|---|
| Framework | `.aget/` | Process patterns, learnings | `.aget/evolution/L*.md` (portable to any domain) |
| Agent Type | Template | Role-specific capabilities | Advisor personas, worker task patterns |
| Instance | `.aget/version.json` | Agent identity, config | agent_name, aget_version, domain |
| Memory | `.memory/` | Engagement state (advisors) | `.memory/clients/{id}/` relationship history |
| Domain | Root | Principal's work product | `sessions/*.md`, `knowledge/*.md` |
Design principle: Framework knowledge (.aget/) is portable across domains. Domain knowledge (root) is principal-owned and specific.
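A minimal sketch of scaffolding that layout, assuming the directory names in the table above; the function itself is illustrative, not an AGET command:

```python
from pathlib import Path

# Layers from the table above: framework (.aget/), memory (.memory/),
# and principal-owned domain directories at the repository root.
LAYER_DIRS = [
    ".aget/evolution",   # framework: portable process learnings
    ".memory/clients",   # memory: engagement state (advisors)
    "sessions",          # domain: principal's session notes
    "knowledge",         # domain: principal's knowledge base
]

def scaffold_agent(root: Path) -> list[Path]:
    """Create the layered directory skeleton; return the created paths."""
    created = []
    for rel in LAYER_DIRS:
        path = root / rel
        path.mkdir(parents=True, exist_ok=True)
        created.append(path)
    return created
```

Keeping framework paths (`.aget/`) separate from domain paths makes the portable/principal-owned split visible at the filesystem level.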
Every agent, feature, and release is formally specified and validated:
Specification Format:
```yaml
# .aget/specs/EXAMPLE_SPEC_v1.0.yaml
requirements:
  R1_capability_check:
    statement: "Agent SHALL validate version.json on startup"
    validation:
      test: "Contract test test_version_file_exists()"
      threshold: "PASS required for deployment"
```

Contract Testing:
- 7-30 pytest-based tests per agent
- Validates: Identity, configuration, capabilities, compliance
- Runs: Pre-commit, pre-deployment, CI/CD
- Example: `pytest tests/test_contract.py -v`
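To illustrate the shape of such a check, here is a self-contained sketch. The field list comes from the version.json example in this README, but the helper itself is hypothetical, not AGET's actual test suite:

```python
import json
from pathlib import Path

# Identity fields every agent's version.json is expected to carry
# (per the version.json example shown in this README).
REQUIRED_FIELDS = ("agent_name", "aget_version", "instance_type", "template")

def check_version_contract(version_path: Path) -> list[str]:
    """Return contract violations; an empty list means the agent passes."""
    if not version_path.exists():
        return ["version.json missing"]
    try:
        data = json.loads(version_path.read_text())
    except json.JSONDecodeError as exc:
        return [f"version.json is not valid JSON: {exc}"]
    return [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in data]
```

In a real suite each violation would be a separate pytest test so failures are individually reported; collapsing them into one list keeps the sketch short.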
Version Compliance:
```json
// .aget/version.json
{
  "agent_name": "my-research-agent",
  "aget_version": "3.12.0",
  "instance_type": "AGET",
  "template": "researcher",
  "migration_history": [
    "v2.9.0 -> v2.10.0: 2025-12-13",
    "v2.10.0 -> v2.11.0: 2025-12-24",
    "v2.11.0 -> v2.12.0: 2025-12-25",
    "v2.12.0 -> v3.0.0: 2025-12-28",
    "v3.0.0 -> v3.1.0: 2026-01-04",
    "v3.5.0 -> v3.6.0: 2026-02-21",
    "v3.6.0 -> v3.7.0: 2026-03-02",
    "v3.7.0 -> v3.8.0: 2026-03-08",
    "v3.8.0 -> v3.9.0: 2026-03-15",
    "v3.9.0 -> v3.10.0: 2026-03-21",
    "v3.10.0 -> v3.11.0: 2026-03-28",
    "v3.11.0 -> v3.11.1: 2026-04-04"
  ]
}
```

Migration history tracked, compliance validated via contract tests.
Open standard configuration works across all CLI coding tools:
```yaml
# AGENTS.md
version: "1.0"
agent:
  name: "my-research-agent"
  type: "researcher"
  domain: "market_analysis"
capabilities:
  - search_literature
  - document_finding
  - analyze_data
```

No vendor lock-in: Same configuration file works with Claude Code, Codex CLI, Gemini CLI. Agent portability preserved.
```
┌─────────────────────────────────────────────────────────┐
│                      PRINCIPALS                         │
│                 (Domain Practitioners)                  │
└─────────────────────────────────────────────────────────┘
                          ↑
┌─────────────────────────────────────────────────────────┐
│                  AI CODING AGENTS                       │
│   (Workers, Advisors, Supervisors, Consultants)         │
└─────────────────────────────────────────────────────────┘
       ↑                                  ↑
┌───────────┐                     ┌──────────────┐
│   AGET    │                     │  CLI TOOLS   │
│           │←───────────────────→│              │
│ • Version │                     │ • Claude Code│
│ • Learning│                     │ • Codex CLI  │
│ • Specs   │                     │ • Gemini CLI │
│ • Govern  │                     │              │
└───────────┘                     └──────────────┘
       ↑                                  ↑
   AGENTS.md                      Universal CLI
   (open std)                     Compatibility
```
Complementary architecture: AGET provides governance layer. CLI tools provide execution environment. Together they enable confident multi-agent deployment.
AGET templates form a deliberate authority hierarchy — agents have different levels of autonomy and accountability:
```
Supervisor ─── Fleet coordination, escalation, cross-agent learning
    ↑
Advisor ───── Read-only guidance (5 personas: teacher, mentor, consultant, guru, coach)
    ↑
Worker ────── Task execution, the foundation archetype for all agents
```
Nine specialized archetypes extend this hierarchy with domain-specific capabilities (developer, analyst, architect, researcher, operator, executive, reviewer, spec-engineer, consultant). Each inherits from worker and can operate alongside advisors or under supervisor coordination.
The supervisor template manages fleet-level operations: agent review, learning propagation, issue escalation, and cross-agent coordination. Recommended starting point — start with a supervisor, then use it to create your fleet agents.
Track agent identity, manage upgrades, ensure compliance:
```json
// .aget/version.json
{
  "agent_name": "my-research-agent",
  "aget_version": "3.12.0",
  "instance_type": "AGET",
  "template": "researcher",
  "domain": "market_analysis"
}
```

Version progression: v2.5 → v2.6 → v2.7 → v2.8 → v2.9 → v2.10 → v2.11 → v2.12 → v3.0.0 → v3.1.0 → v3.2.0 → v3.2.1 → v3.3.0 → v3.4.0 → v3.5.0 → v3.6.0 → v3.7.0 → v3.8.0 → v3.9.0 → v3.10.0 → v3.11.0 → v3.11.1 → v3.12.0

Migration history tracked, contract tests enforce compliance.
Propagate insights across your fleet:
```markdown
# .aget/evolution/L315_pattern_discovered.md
## Problem: Agents duplicated work (no shared context)
## Learning: Centralize learnings in .aget/evolution/
## Protocol: Pattern deployment across fleet
```

Collective intelligence: Fleet gets smarter together, not individually.
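Because learnings live as plain markdown files, they are indexable with ordinary tooling. A sketch, assuming the L-doc naming and `## Learning:` heading shown above (the helper itself is hypothetical):

```python
from pathlib import Path

def index_learnings(evolution_dir: Path) -> dict[str, str]:
    """Map each L-doc filename to its '## Learning:' line, if present."""
    index = {}
    for doc in sorted(evolution_dir.glob("L*.md")):
        learning = next(
            (line.removeprefix("## Learning:").strip()
             for line in doc.read_text().splitlines()
             if line.startswith("## Learning:")),
            "",
        )
        index[doc.name] = learning
    return index
```

An index like this is what makes "propagate insights across the fleet" cheap: any agent can scan a peer's `.aget/evolution/` directory without special infrastructure.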
Gated releases with human supervision:
- Incremental go/no-go decision points
- Contract testing (deployment verification)
- Evidence-based planning
- Honest gap recording (prediction accuracy is metric)
AGENTS.md configuration works across:
- ✅ Claude Code (primary)
- ✅ Codex CLI (supported)
- ✅ Gemini CLI (supported)
- ⚠️ Cursor, Aider (experimental)
- ✅ Any CLI tool supporting configuration files
No tool lock-in. No vendor-specific formats. Just open standards.
Pain: "I explain the same context every session"
Your AI agent forgets everything between sessions. You waste the first 10 minutes re-explaining project history, decisions made, and work in progress.
AGET's lifecycle protocols preserve context across sessions automatically. Wake-up loads previous state; wind-down captures decisions for next session.
See it in action: Session protocols
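The wake-up/wind-down round trip can be sketched in a few lines. The state file location and keys below are hypothetical, not AGET's actual session format:

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical location for persisted session state.
STATE_FILE = Path(".aget/session_state.json")

def wind_down(decisions: list[str], in_progress: list[str]) -> None:
    """Capture session state so the next wake-up can restore context."""
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps({
        "date": date.today().isoformat(),
        "decisions": decisions,
        "in_progress": in_progress,
    }, indent=2))

def wake_up() -> dict:
    """Load the previous session's state, or an empty context."""
    if not STATE_FILE.exists():
        return {"decisions": [], "in_progress": []}
    return json.loads(STATE_FILE.read_text())
```

The point of the pattern is that the agent, not the human, pays the cost of context restoration at the start of each session.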
Pain: "My agents don't know about each other"
You have 5 agents across 5 projects. They duplicate work, contradict each other, and can't share what they've learned.
AGET's fleet management coordinates agents with supervisor/worker patterns, shared configuration, and learning propagation across your entire agent fleet.
See it in action: Fleet templates
Pain: "My AI doesn't get smarter over time"
You've been working with AI tools for months. They've helped solve hundreds of problems. But they haven't learned anything — no patterns captured, no lessons retained, no expertise compounding.
AGET's evolution tracking captures decisions, learnings, and patterns as permanent, searchable knowledge. Your agent gets more effective with every session.
See it in action: Learning architecture
Pain: "I'm locked into one tool"
Cursor today, Claude Code tomorrow, Windsurf next month. Your agent configuration shouldn't be hostage to vendor choice.
AGET works across all major CLI coding tools with a single AGENTS.md configuration. No vendor lock-in.
Supported: Claude Code, Codex CLI, Gemini CLI
Pain: "I can't explain what the AI decided"
Compliance asks: "Why did the AI make that choice?" You have no answer—just an endless chat transcript.
AGET's gated workflows and evolution tracking create an auditable trail of decisions, rationale, and outcomes. Every significant decision documented with PROJECT_PLAN pattern.
See it in action: Governance patterns
- Versioning Guide - Version system and compatibility
- Upgrade Guide - Safe upgrade procedures
- Releases - Release process and quality standards
- Version History - Complete release timeline
- Template Structure Guide - Understanding agent templates
- Layer Architecture - 5-layer knowledge architecture
- AGET: Human-supervised coordination (not autonomous)
- Them: Autonomous execution runtimes
- Difference: AGET brings governance and learning to CLI tools you already use

- AGET: Lightweight, zero-infrastructure (markdown + git)
- Them: Cloud platforms, observability infrastructure
- Difference: AGET works locally with no servers, no overhead

- AGET: Fleet coordination, shared learning, version control
- Them: Single-agent, no versioning, no cross-agent learning
- Difference: AGET enables coordination across tools and agents
- ✅ Issue Governance v2.1.0: Triage, lifecycle state machine, Issue Forms — 3 new capabilities, 9 EARS requirements, 54 SKOS vocabulary terms
- ✅ Homepage Rewrite: README 169→104 lines, Quick Start at line 14, pain-point framing, R-HOM-001 6/7 conformance
- ✅ Epistemic Parameterization: `study_topic.py` gains `--purpose` and `--domain-keywords` for agent-aware KB search
- ✅ First Deprecation Cycle: 3 deprecated items removed (capture verb, study-up script, record-nugget skill)
- ✅ Release State Management: SOP state machine, BLOCKING deployment_monitor, tag_release.py automation
Released: 2026-04-04
- Renamed `aget_housekeeping_protocol.py` → `health_check.py`
- Renamed `study_up.py` → `study_topic.py`
- New: `tag_release.py`
Released: 2026-03-28
- ✅ Requirements Layer: Human-level requirements directory (`requirements/`) with REQUIREMENTS_FORMAT.md and REQ-REL exemplar — first formal requirements artifact per L742 two-level model
- ✅ Hook Adoption: `.claude/hooks/` scaffolded across all 12 templates with HOOK_ADOPTION_GUIDE — ADR-008 Generator-level infrastructure
- ✅ Skill Conformance: 17 skill instructions remediated for L736 conformance — assert-before-verify anti-pattern eliminated
- ✅ Archetype Governance: `governance_intensity` field in all 12 template AGENTS.md (Rigorous/Standard/Lightweight)
- ✅ Release Quality: Pre-push hook, study_up.py fix, "sanity→health" terminology, homepage roadmap, phantom cleanup
- ✅ Requirements-Spec Traceability: AGET_RELEASE_SPEC v1.11.0 — first bidirectional requirements ↔ spec grounding
Released: 2026-03-21
- ✅ 3-Layer Structural Enforcement: MUST-invoke directives for `/aget-create-project` and `/aget-file-issue`; gate completion requires plan update + commit; skill completion signals
- ✅ Dual-Repo Sync Governance: SOP Phase -0.5 governs private→public content sync with validators and SYNC_MANIFEST tracking
- ✅ Skill Naming Reconciliation: 3 renames across 14 repos — `capture` verb retired from Learning family, unified under `record` + `study`
- ✅ SKILL_SPEC_TEMPLATE.yaml: Deployed to all 12 templates
- ✅ Template Hygiene: VERSION, setup.py classifier, SECURITY.md updates
Released: 2026-03-15
- ✅ Phase -1: Release Readiness: 3 sub-phases (B.1 Assessment, B.2 Conformance Audit, B.3 Principal Approval) with 12-item checklist — governs Gap B transition
- ✅ Phase 0.85: Deliverable Conformance Check: SHALL violations are BLOCKING
- ✅ Gate 0: Spec Verification (MP-1): Mandatory spec verification sweep before implementation begins
- ✅ Version-Bearing Enforcement: version_bump.py extended to 5/5 artifact types (version.json, README.md, AGENTS.md, codemeta.json, CITATION.cff) with `--check` validation mode
- ✅ GOVERNANCE_PRINCIPLES.md: First public publication (6 Tier 1 + 5 Tier 2 meta-principles)
- ✅ aget-enhance-spec Fixes: Phase 6 consistency (#418), phantom spec reference (#419)
Released: 2026-03-08
- ✅ Meta-Principle Codification: GOVERNANCE_PRINCIPLES.md v1.1.0 — 6 Tier 1 + 5 Tier 2 meta-principles answering "what rules govern the rules?"
- ✅ Structural Aesthetics: Third design principle integrated into DESIGN_PHILOSOPHY, MISSION, homepage
- ✅ Skill Customization Detection: `pre_sync_check.py` detects, classifies, and reports skill customizations before upgrade
- ✅ PROJECT_PLAN Validator: `validate_project_plan.py` prevents prompt-as-plan anti-pattern
- ✅ Private-First Issue Routing: AGET_ISSUE_GOVERNANCE_SPEC v2.0.0 — all issues route to private repo first
- ✅ Template Governance: All 12 templates updated with `.claude/` scaffolding and governance patterns
- ✅ New Skills: `aget-enhance-spec` v1.1.0 (spec lifecycle), `aget-expand-ontology` v1.0.0 (SKOS expansion)
Released: 2026-03-02
- ✅ Content Integrity Validation: CONTENT_INTEGRITY_SPEC v1.0.0 — 38 EARS requirements covering 8 dimensions of content claim drift
- ✅ Evidence-Based Positioning: 15 READMEs + 2 specs reframed to lead with demonstrated capabilities
- ✅ Skill Verb Vocabulary: 4 skill renames aligned to approved verbs (`aget-studyup` → `aget-study-up`)
- ✅ SOP Lifecycle Management: AGET_SOP_SPEC v1.2.0 with Draft/Active/Deprecated states
- ✅ Specification Enhancement Lifecycle: SKILL-041 + SOP for governed spec creation/updates
- ✅ 15 Universal Skills: 3-way mismatch resolved — spec/README/deployed aligned at 15 universal
Released: 2026-02-21
- ✅ Release Observability: 5 scripts — validation_logger, run_gate, release_snapshot, propagation_audit, health_logger
- ✅ Content Integrity: 6 dimensions of claim-vs-reality drift fixed across all repos
- ✅ Canonical Scripts v2.0.0: C3+C1 hybrid architecture (config-driven + hook-based extensions)
- ✅ Universal Skills: 14 skills (added aget-studyup)
- ✅ Vocabulary Precision: 4 compliance behavioral terms (VOCABULARY_SPEC v1.16.0)
- ✅ Platform Claims: Claude Code, Codex CLI, Gemini CLI (Cursor/Aider → Experimental)
- ✅ Conformance Tool v1.3.0: 12/12 templates CONFORMANT at deep depth
Released: 2026-02-14
- ✅ Archetype Ontologies: 12 ONTOLOGY_{archetype}.yaml files with 87 domain concepts
- ✅ Archetype Skills: 26 archetype-specific skills (2-3 per archetype)
- ✅ Universal Skills: 13 skills shared across all templates
- ✅ Skill Specifications: EARS-compliant specs for all 39 skills
- ✅ Ontology-Driven: Vocabulary → Specification → Instance pattern (L486)
Released: 2026-01-18
- ✅ Session Protocol Enhancements: Re-entrancy guard, calendar awareness, sanity gate
- ✅ Cross-CLI Validation: Tested on Claude Code, Codex CLI, Gemini CLI
- ✅ Governance Formalization: Release, behavioral, and artifact governance patterns
- ✅ Spec-First Documentation: AGET_IDENTITY_SPEC.yaml, AGET_POSITIONING_SPEC.yaml
- ✅ New SOPs: L-doc creation, Enhancement Request, PROJECT_PLAN archival
- ✅ Template Infrastructure: sops/ with SOP_escalation.md in all 12 templates (R-TEMPLATE-001)
- ✅ codemeta.json + CITATION.cff: Standard software metadata
Released: 2026-01-10
- ✅ Shell Orchestration: aget.zsh, profiles.zsh (5 CLI backends)
- ✅ SKOS-Compliant Vocabularies: All 12 templates have ontologies (R-REL-015)
- ✅ Ontology-Driven Creation (L481, L482): Specs drive instances, not follow them
- ✅ AGET_EXECUTABLE_KNOWLEDGE_SPEC.md: Executable knowledge framework
- ✅ AGET_EVOLUTION_SPEC.md: Evolution entry standardization
- ✅ 18 New L-docs: L451-L503 learnings documented
Released: 2026-01-04
- ✅ L444 Remediation: Version consistency across all version-bearing files
- ✅ Coherence Testing: New V-tests for AGENTS.md, manifest.yaml verification
- ✅ SOP Update: Gate 7 V-tests for version inventory coherence
Released: 2026-01-04
- ✅ 7 New Specifications: Testing, Release, Documentation, Organization, Error, Security, Project Plan
- ✅ Naming Conventions Expansion: 4 → 10 categories (Categories F-J)
- ✅ Specification Index System: INDEX.md (30 specs) + REQUIREMENTS_MATRIX.md (78 CAP requirements)
- ✅ 6 New Validators: License, Agent Structure, Release Gate, L-doc Index, SOP, Homepage
- ✅ Standardized Spec Headers: YAML frontmatter with version, status, dependencies
- ✅ Learnings: L439, L440, L443
Released: 2026-01-04
- ✅ Cross-CLI Infrastructure: Agent-agnostic scripts with --json output
- ✅ Complete Session Lifecycle: wake up → sanity check → wind down
- ✅ Verification Architecture: Source-verified constants, enforcement testing
- ✅ L-doc Format v2: Cross-agent discovery, adoption tracking
- ✅ Fleet Validation Tooling: validate_fleet.py, version_sync.py
- ✅ Workflow Automation: L-doc to GitHub Issue, cascade to SOP
Released: 2025-12-28
- ✅ 5D Directory Structure: persona/, memory/, reasoning/, skills/, context/
- ✅ Instance Type System: aget (advisory), AGET (action-taking), template
- ✅ Template field: Replaces deprecated roles array
- ✅ All 6 templates migrated to v3.0 architecture
- ✅ 731 contract tests passing across framework
- ✅ Breaking changes: roles removed, manifest_version 3.0
Released: 2025-12-25
- ✅ Complete capability composition system (5 specs, 3 validators, 80 tests)
- ✅ Template manifest system for agent composition (manifest.yaml)
- ✅ Fleet migration enablement (6 pilots validated)
- ✅ Governance exemplar enforcement (L367)
Released: 2025-12-24
- ✅ Memory Architecture (L335): 6-layer information model
- ✅ L352 Traceability Pattern: Requirement-to-test traceability
- ✅ R-PUB-001 Public Release Completeness (8 requirements)
- ✅ Version migration protocol (R-REL-006)
Released: 2025-12-13
- ✅ 6 agent type specifications
- ✅ Executive Advisor pattern (5W+H knowledge architecture)
- ✅ Theoretical grounding protocol (L332)
- Roadmap: View planned enhancements
- GitHub Issues: File bugs, request features
- GitHub Releases: View releases, changelogs
- CLI tools are ecosystem partners (not competitors)
- Open standards enable collective innovation (not ownership advantage)
- Success measured by agent effectiveness (not market capture)
- Positive direction leads (affirmative framing, not negative contrast)
- Beauty signals health; ugliness signals problems worth investigating
- Not all ugliness is failure — some is the cost of evolution
- Naming, structure, and ceremony should feel inevitable (not forced)
- Artifacts should invite engagement (not just pass validation)
- Agents augment human capability (not replace)
- Human judgment remains central (gated releases, incremental go/no-go)
- Learning investment valued (extract systematic insights)
- Decisions grounded in data (not assumptions)
- Learnings extracted systematically (L-series evolution documents)
- Honest gap recording (prediction accuracy is metric)
Apache 2.0 License - See LICENSE for details
| Role | Member |
|---|---|
| Creator & Lead Maintainer | @gmelli |
See MAINTAINERS.md for governance details.
Built with:
- Claude Code (Anthropic) - AI coding assistant
- Universal CLI Compatibility - Works across Claude Code, Codex CLI, Gemini CLI
- Open Standards - AGENTS.md specification
- Community Contributors - Thank you for making AGET better
AGET Framework - Persistent domain intelligence for governed agentic work
Build AI agents that accumulate domain expertise serving your decisions
Maintained by @gmelli • gabormelli.com