# Providers
github-actions[bot] edited this page Apr 18, 2026 · 4 revisions
Multi-provider support for Claude Code, OpenAI Codex CLI, and Google Gemini CLI.
Loki Mode supports three AI providers with different capability levels:
| Provider | Status | Task Tool | Parallel | MCP | Context |
|---|---|---|---|---|---|
| Claude | Full | Yes | Yes (10+) | Yes | 200K |
| Codex | Degraded | No | No | No | 128K |
| Gemini | Degraded | No | No | No | 1M |
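The capability matrix above can be expressed as a small lookup. This is a hypothetical helper for illustration only, not part of the Loki CLI; the capability strings simply restate the table.

```shell
# Hypothetical helper (not part of the Loki CLI): map a provider name to
# the capability summary from the comparison table.
provider_caps() {
  case "$1" in
    claude) echo "task-tool parallel mcp context=200K" ;;
    codex)  echo "sequential context=128K" ;;
    gemini) echo "sequential context=1M" ;;
    *)      echo "unknown"; return 1 ;;
  esac
}

provider_caps codex   # prints: sequential context=128K
```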
## Claude

Full-featured provider with complete Loki Mode capabilities.
```bash
# Install Claude Code CLI
npm install -g @anthropic-ai/claude-code

# Authenticate
claude login
```

| Tier | Model | Use Case |
|---|---|---|
| Planning | claude-opus-4-7 | Architecture, system design (1M context, adaptive thinking) |
| Development | claude-sonnet-4-6 | Implementation, testing (1M context) |
| Fast | claude-haiku-4-5 | Simple tasks, monitoring (200K context) |
```bash
# Launch Claude with autonomous permissions
claude --dangerously-skip-permissions

# In Claude:
# "Loki Mode with PRD at ./my-prd.md"
```

- Task Tool - Spawn subagents for parallel work
- Parallel Agents - Up to 10+ concurrent agents
- MCP Integration - Extended tool capabilities
- Extended Thinking - Deep reasoning for complex problems
- 3 Model Tiers - Right-size for each task
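The three model tiers can be right-sized per task with a simple mapping. This helper is a sketch using the model names from the tier table above; it is not a Loki function.

```shell
# Sketch: pick a model tier by task type (model names from the tier table;
# the helper itself is hypothetical, not part of Loki)
model_for_task() {
  case "$1" in
    planning)    echo "claude-opus-4-7"   ;;  # architecture, system design
    development) echo "claude-sonnet-4-6" ;;  # implementation, testing
    *)           echo "claude-haiku-4-5"  ;;  # simple tasks, monitoring
  esac
}

model_for_task planning   # prints: claude-opus-4-7
```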
```bash
# Set as default provider
loki provider set claude

# Or via environment
export LOKI_PROVIDER=claude
```

## Codex

Degraded mode with sequential execution only.
```bash
# Install Codex CLI
npm install -g @openai/codex-cli

# Authenticate
codex auth
```

| Model | Context | Notes |
|---|---|---|
| gpt-5.3-codex | 128K | Official model for Codex CLI v0.98+ |
```bash
# Recommended (v0.98.0+)
codex --full-auto

# Legacy
codex exec --dangerously-bypass-approvals-and-sandbox
```

- No Task tool (sequential only)
- No parallel agents
- No MCP integration
- Single model (uses effort parameter)
```bash
# Set as provider
loki provider set codex

# Or via environment
export LOKI_PROVIDER=codex

# Start with Codex
loki start ./prd.md --provider codex
```

Codex uses an effort parameter instead of model tiers:
```
effort: low    -> Quick responses
effort: medium -> Balanced (default)
effort: high   -> Thorough analysis
```
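The effort levels above can be chosen from task complexity. The mapping below is illustrative, not Loki's actual selection logic.

```shell
# Sketch: map task complexity to a Codex effort level
# (the complexity labels and mapping are illustrative assumptions)
effort_for() {
  case "$1" in
    trivial) echo "low"  ;;   # quick responses
    deep)    echo "high" ;;   # thorough analysis
    *)       echo "medium" ;; # balanced default
  esac
}

effort_for deep   # prints: high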
## Gemini

Degraded mode with large context window.
```bash
# Install Gemini CLI
npm install -g @google/gemini-cli

# Authenticate
gemini auth
```

| Model | Context | Notes |
|---|---|---|
| gemini-3-pro-medium | 1M | Placeholder name |
```bash
# Autonomous mode (verified v0.27.3)
gemini --approval-mode=yolo "Your prompt here"

# Note: -p flag is DEPRECATED - use positional prompts instead
```

- No Task tool (sequential only)
- No parallel agents
- No MCP integration
- Single model
```bash
# Set as provider
loki provider set gemini

# Or via environment
export LOKI_PROVIDER=gemini

# Start with Gemini
loki start ./prd.md --provider gemini
```

## Provider Commands

```bash
loki provider show
# Output: Current provider: claude
```

```bash
loki provider list
# Output:
# Available providers:
# claude (installed, default)
# codex (installed)
# gemini (installed)
```

```bash
loki provider info claude
# Output:
# Provider: claude
# Status: Full features
# Model: claude-opus-4-7
# Context: 1M tokens
# Capabilities: Task tool, parallel, MCP, adaptive thinking
```

```bash
# Persists across sessions
loki provider set codex

# Override for single session
loki start ./prd.md --provider gemini
```

Claude: Full support
Spawn up to 10+ parallel subagents for:
- Research tasks
- Code review
- Testing
- Documentation
Codex/Gemini: Not supported; all tasks run sequentially in the main context.
Claude: Git worktrees + parallel agents
```bash
export LOKI_PARALLEL_MODE=true
export LOKI_MAX_PARALLEL_SESSIONS=3
```

Codex/Gemini: Sequential only; each task completes before the next begins.
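The difference between the two modes can be sketched as a queue with a concurrency cap. The helper below is illustrative, not Loki's internal scheduler; with a cap of 1 it degenerates to the sequential behavior Codex and Gemini get.

```shell
# Sketch: run queued tasks with a concurrency cap, as parallel mode might.
# The function and task names are illustrative, not Loki internals.
run_with_cap() {
  max=$1; shift
  running=0
  for t in "$@"; do
    ( echo "done: $t" ) &          # each task runs in a background subshell
    running=$((running+1))
    if [ "$running" -ge "$max" ]; then
      wait                         # block until the current batch finishes
      running=0
    fi
  done
  wait                             # drain any remaining tasks
}

run_with_cap "${LOKI_MAX_PARALLEL_SESSIONS:-3}" build test lint docs
```

With `max=1` every task waits for the previous one, which models the Codex/Gemini sequential fallback.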
| Provider | Context | Effective Use |
|---|---|---|
| Claude | 200K | Large codebases |
| Codex | 128K | Medium projects |
| Gemini | 1M | Very large files |
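When deciding whether a project fits a provider's window, a rough rule of thumb of about 4 characters per token can be used. This estimator is an assumption for illustration; real tokenizers vary by model.

```shell
# Rough token estimate for a file (assumption: ~4 characters per token;
# actual tokenizer behavior differs per provider)
estimate_tokens() {
  wc -c < "$1" | awk '{ print int($1 / 4) }'
}

printf 'hello world' > "${TMPDIR:-/tmp}/ctx-demo.txt"
estimate_tokens "${TMPDIR:-/tmp}/ctx-demo.txt"   # 11 chars -> prints: 2
```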
## Degraded Mode

When using Codex or Gemini:
- No Parallel Agents - Tasks run sequentially
- No Task Tool - Cannot spawn subagents
- No MCP - Limited to built-in tools
- Single Model - No tier selection
- Longer Execution - Same work takes more time
Loki Mode automatically adjusts when in degraded mode:
- Phases run sequentially instead of in parallel
- Code review uses a single pass instead of the 3-reviewer flow
- Research tasks run inline instead of in the background
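The review adjustment above amounts to a capability check on the active provider. The function below is a sketch of that decision, not Loki's actual code; it assumes only Claude supports parallel reviewers, per the comparison table.

```shell
# Sketch: choose a review strategy from the active provider
# (assumption: only claude supports the 3-reviewer parallel flow)
review_strategy() {
  if [ "$1" = "claude" ]; then
    echo "3 parallel reviewers"
  else
    echo "single pass"
  fi
}

review_strategy "${LOKI_PROVIDER:-claude}"
```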
## Choosing a Provider

Choose Claude when:
- Making complex multi-file changes
- You need parallel execution
- Code review quality matters
- Using MCP integrations
Choose Codex when:
- Speed is important
- You prefer the OpenAI ecosystem
- Tasks are simpler and focused
- Cost optimization is needed
- A sequential workflow is acceptable
Choose Gemini when:
- Very large context is needed
- You prefer the Google ecosystem
- Tasks are simple but files are large
- Cost optimization is needed
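The decision lists above can be condensed into a small selector. This helper is hypothetical and only encodes the two distinguishing requirements from this section (parallel execution and very large context); everything else defaults to Codex.

```shell
# Sketch: pick a provider from task requirements (hypothetical helper;
# encodes only the parallel-vs-context trade-off described above)
pick_provider() {
  needs_parallel=$1   # yes/no: parallel agents, MCP, 3-reviewer flow
  huge_context=$2     # yes/no: needs the 1M-token window
  if [ "$needs_parallel" = "yes" ]; then
    echo "claude"
  elif [ "$huge_context" = "yes" ]; then
    echo "gemini"
  else
    echo "codex"
  fi
}

pick_provider no yes   # prints: gemini
```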
## Troubleshooting

```bash
loki provider info codex
# Error: Provider 'codex' not installed
# Solution: Install the CLI
npm install -g @openai/codex-cli
```

```bash
# Re-authenticate
claude login
codex auth
gemini auth
```

```bash
# Check current provider
loki provider show

# Reset to default
loki provider set claude
```