
feat(llm): Add OpenAI Codex (ChatGPT subscription) as LLM provider#744

Closed
Sanjeev-S wants to merge 4 commits into nearai:staging from Sanjeev-S:feat/openai-codex-provider

Conversation

@Sanjeev-S
Contributor

Summary

  • Add openai_codex LLM backend so ChatGPT Pro/Plus subscribers can use IronClaw without a separate API key
  • OAuth device code login (headless-friendly, same session persistence pattern as NEAR AI provider)
  • Native Responses API client with SSE parsing, tool call round-trips
  • Token-refreshing decorator with pre-emptive refresh and retry on auth failure

Usage

LLM_BACKEND=openai_codex cargo run
# First run triggers device code login:
#   1. Open https://auth.openai.com/codex/device
#   2. Enter code: XXXX-XXXX
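
The device code login shown above follows the RFC 8628 pattern: poll the token endpoint at a server-suggested interval, back off when the server answers slow_down, and cap the interval so repeated 429s cannot grow it without bound. A minimal sketch of just the polling-interval rule (the function name and the 60 s cap are illustrative, not this PR's actual code):

```rust
/// RFC 8628 polling rule: keep the server-suggested interval, add 5 s
/// whenever the server answers `slow_down`, and cap the result so the
/// interval cannot grow unboundedly on repeated 429 responses.
fn next_poll_interval(current_secs: u64, slow_down: bool) -> u64 {
    let next = if slow_down { current_secs + 5 } else { current_secs };
    next.min(60)
}

fn main() {
    let mut interval = 5; // server-suggested starting interval
    for _ in 0..20 {
        // pretend every poll came back `slow_down`
        interval = next_poll_interval(interval, true);
    }
    println!("capped interval: {interval}s"); // stays at the 60 s cap
}
```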

Commits

  1. Config + OAuth session manager — OpenAiCodexConfig, LlmBackend::OpenAiCodex, device code auth flow with token persistence and auto-refresh
  2. Responses API client + token-refreshing decorator — Native client for chatgpt.com/backend-api/codex/responses, SSE stream parsing, atomic token state, best-effort pre-emptive refresh
  3. Wiring — Provider factory, CLI --backend openai_codex, setup wizard integration

Test plan

  • cargo fmt — no changes
  • cargo clippy --all --benches --tests --examples --all-features — zero warnings
  • cargo test --lib — 1878 passed, 0 failed
  • Manual end-to-end: device code login → send message → received response via Responses API

Closes #742

🤖 Generated with Claude Code

@github-actions bot added labels on Mar 8, 2026: scope: channel/cli (TUI / CLI channel), scope: llm (LLM integration), scope: setup (Onboarding / setup), size: XL (500+ changed lines), risk: high (Safety, secrets, auth, or critical infrastructure), contributor: new (First-time contributor)
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly!

This pull request significantly expands the application's LLM provider capabilities by integrating OpenAI Codex, which enables users with a ChatGPT subscription (Pro/Plus) to leverage their existing accounts. The implementation includes a robust OAuth device code authentication flow for seamless login, a dedicated client for OpenAI's Responses API to handle chat interactions and tool usage, and an intelligent token-refreshing mechanism to maintain session continuity. This feature aims to reduce friction for a large segment of users by removing the need for a separate API key and providing a more integrated experience.

Highlights

  • New LLM Provider: OpenAI Codex: Added OpenAI Codex as an LLM provider, allowing ChatGPT Pro/Plus subscribers to use the application without a separate API key.
  • OAuth Device Code Login: Implemented OAuth device code login for headless environments, ensuring secure and persistent authentication.
  • Native Responses API Client: Developed a native Responses API client with Server-Sent Events (SSE) parsing and support for tool calls.
  • Token Refreshing Decorator: Introduced a token-refreshing decorator that handles pre-emptive token refreshes and retries on authentication failures.
Changelog
  • .env.example
    • Added openai_codex as a possible LLM backend and included example configuration for it.
  • .gitignore
    • Added .worktrees to the ignore list.
  • src/app.rs
    • Modified the init_llm function to be asynchronous and await the LLM provider chain build.
  • src/cli/mod.rs
    • Introduced a new Login command with an --openai-codex option for initiating the authentication flow.
  • src/cli/snapshots/ironclaw__cli__tests__help_output.snap
    • Updated the CLI help snapshot to reflect the new login command.
  • src/cli/snapshots/ironclaw__cli__tests__long_help_output.snap
    • Updated the CLI long help snapshot to reflect the new login command.
  • src/config/llm.rs
    • Extended the LlmBackend enum with OpenAiCodex, added OpenAiCodexConfig for specific settings, and updated configuration resolution logic to support the new backend.
  • src/config/mod.rs
    • Exported the newly defined OpenAiCodexConfig.
  • src/llm/mod.rs
    • Integrated new modules for OpenAI Codex provider, session management, and token refreshing. Updated the LLM provider creation logic to handle OpenAiCodex asynchronously.
  • src/llm/openai_codex_provider.rs
    • Created a new module implementing the LlmProvider trait for OpenAI Codex, handling API requests, JWT parsing, and SSE event processing.
  • src/llm/openai_codex_session.rs
    • Created a new module for managing OpenAI Codex OAuth 2.0 sessions, including device code flow, token persistence, and auto-refresh.
  • src/llm/token_refreshing.rs
    • Created a new module providing a decorator for OpenAiCodexProvider to handle pre-emptive token refreshing and retry logic on authentication failures.
  • src/main.rs
    • Added command-line argument parsing and execution logic for the new login --openai-codex command.
  • src/setup/wizard.rs
    • Updated the interactive setup wizard to include OpenAI Codex as a selectable LLM provider and integrated its authentication process.
Activity
  • No human activity (comments, reviews, etc.) was provided in the context.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces OpenAI Codex (via ChatGPT subscription) as a new LLM provider. The implementation is extensive, covering configuration, authentication, and a robust API client with thoughtful details like token auto-refresh and error handling. However, sensitive session data lacks consistent protection across all supported platforms. A minor inconsistency was also noted in the environment variable documentation.

Comment on lines +558 to +570
#[cfg(unix)]
{
    use std::os::unix::fs::PermissionsExt;
    let perms = std::fs::Permissions::from_mode(0o600);
    tokio::fs::set_permissions(&self.config.session_path, perms)
        .await
        .map_err(|e| {
            LlmError::Io(std::io::Error::new(
                e.kind(),
                format!("Failed to set permissions: {}", e),
            ))
        })?;
}
Contributor


Severity: medium (security)

The application attempts to set restrictive permissions (0o600) on the session file containing sensitive OAuth tokens, but this is only implemented for Unix-like systems via #[cfg(unix)]. On other platforms, such as Windows, the file may be created with default permissions, potentially allowing other users on the same machine to read the sensitive tokens. This violates the principle of secure data handling for sensitive credentials.
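
For comparison, a cross-platform sketch (a hypothetical helper, not the PR's code) that restricts the session file to the owner at creation time on Unix, so there is no window where the file exists with default permissions; on non-Unix targets it falls back to a plain write:

```rust
use std::io::Write;
use std::path::Path;

/// Write `contents` to `path`. On Unix the file is created with mode
/// 0o600 (owner read/write only), applied atomically at creation rather
/// than via a chmod after the fact. On non-Unix targets this is a plain
/// write; Windows would need explicit ACLs for equivalent protection.
fn write_session_file(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let mut opts = std::fs::OpenOptions::new();
    opts.write(true).create(true).truncate(true);
    #[cfg(unix)]
    {
        use std::os::unix::fs::OpenOptionsExt;
        opts.mode(0o600); // applies at creation; umask may clear further bits
    }
    let mut file = opts.open(path)?;
    file.write_all(contents)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join(format!("codex-session-{}.json", std::process::id()));
    write_session_file(&path, b"{\"access_token\":\"redacted\"}")?;
    println!("wrote {}", path.display());
    std::fs::remove_file(&path)
}
```

Note the creation-time mode only takes effect when the file does not already exist; an existing file keeps its old permissions, which is why the PR's explicit set_permissions call is still a reasonable belt-and-suspenders step.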

Comment thread: .env.example
# LLM_BACKEND=openai_codex
# OPENAI_CODEX_MODEL=gpt-5.3-codex # default
# OPENAI_CODEX_CLIENT_ID=app_EMoamEEZ73f0CkXaXp7hrann # override (rare)
# OPENAI_CODEX_AUTH_URL=https://auth.openai.com # override (rare)
Contributor


Severity: medium

For consistency with the implementation in src/config/llm.rs, it would be helpful to also document the OPENAI_CODEX_API_URL environment variable here. The code allows overriding the API base URL, but it's not mentioned in this example file, which could be confusing for users trying to configure a proxy.

# OPENAI_CODEX_AUTH_URL=https://auth.openai.com  # override (rare)
# OPENAI_CODEX_API_URL=https://chatgpt.com/backend-api/codex # override (rare)

@Sanjeev-S force-pushed the feat/openai-codex-provider branch 2 times, most recently from 5fbfd22 to de1bb7a on March 8, 2026 at 23:52
Collaborator

@zmanian left a comment


Good implementation. A separate provider is justified -- it targets a completely different API surface (the Responses API at chatgpt.com with OAuth device code auth). It follows the existing LlmProvider trait pattern correctly, and test coverage is thorough.

Three blocking issues:

  1. src/llm/CLAUDE.md not updated -- Project rules require updating specs when adding new behavior. The LLM module spec needs entries for the new provider, files, and the async build_provider_chain() change.

  2. Hardcoded /tmp/ path in test -- token_refreshing.rs uses /tmp/test-codex-session.json. Project requires tempfile crate for test files.

  3. build_provider_chain() sync-to-async is a breaking API change -- Consider making only the openai_codex path async within the function body, or document why the signature change is necessary.

Non-blocking:

  • Debug-level logging of device code response body includes sensitive auth data
  • generate_pkce() function is dead code (defined, tested, never called)
  • request_timeout_secs from config is ignored (hardcoded 300s)
  • Missing Retry-After header parsing for 429 responses
  • Missing strict-mode schema normalization for tool definitions
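
On the Retry-After point: RFC 9110 allows the header as either delay-seconds or an HTTP-date. A stdlib-only sketch of the delay-seconds branch with a bounded fallback (the function name and the default/cap values are illustrative; the HTTP-date form would need a date parser such as the httpdate crate):

```rust
use std::time::Duration;

/// Parse a `Retry-After` value in its delay-seconds form, clamping to
/// `max` so a hostile or buggy server cannot stall the client forever.
/// Anything unparseable (including the HTTP-date form, not handled
/// here) falls back to `default`.
fn retry_after(value: &str, default: Duration, max: Duration) -> Duration {
    match value.trim().parse::<u64>() {
        Ok(secs) => Duration::from_secs(secs).min(max),
        Err(_) => default,
    }
}

fn main() {
    let d = retry_after("7", Duration::from_secs(30), Duration::from_secs(300));
    println!("retry in {}s", d.as_secs()); // prints "retry in 7s"
}
```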

Member

@ilblackdragon left a comment


Code Review

Solid implementation — clean architecture following existing decorator patterns, good test coverage (25+ unit tests), and well-integrated into the setup wizard. A few items to address:

Critical

  1. Hardcoded /tmp/ path in test (src/llm/token_refreshing.rs)

    session_path: std::path::PathBuf::from("/tmp/test-codex-session.json"),

    Project rules require tempfile::tempdir() — this will collide in parallel test runs. The session tests already do it correctly.

  2. generate_pkce() is dead code (src/llm/openai_codex_session.rs) — never called since the device code flow gets the PKCE pair from the server. Remove or #[allow(dead_code)] with a TODO for the browser PKCE fallback.

High

  1. refresh_tokens() uses .json() but initial token exchange uses .form() (openai_codex_session.rs) — Auth0/OpenAI token endpoints expect application/x-www-form-urlencoded for all grant types. The refresh call should use .form() to be consistent and avoid breakage with stricter server validation.

  2. Image attachments silently dropped (openai_codex_provider.rs convert_message) — msg.images is ignored for user messages. The Responses API supports image_url content parts. At minimum log a warning; ideally convert them like nearai_chat.rs does.

  3. OpenAiCodexSession stores tokens as plain String — Debug is correctly redacted, but the struct is Clone + pub, so tokens can leak through other paths. Consider removing the Clone derive or making the struct pub(crate).
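
On the .form() vs .json() point: RFC 6749 requires token-endpoint request bodies to be application/x-www-form-urlencoded for every grant type, including refresh_token. A stdlib-only sketch of what that body looks like (reqwest's .form() produces this encoding automatically; the field values are placeholders):

```rust
/// Build the urlencoded body for a refresh_token grant (RFC 6749 §6).
/// A real client should use a proper form encoder (reqwest's `.form()`
/// does this for you); this minimal encoder exists only to make the
/// wire format concrete.
fn refresh_body(client_id: &str, refresh_token: &str) -> String {
    // Percent-encode everything outside the RFC 3986 unreserved set.
    fn enc(s: &str) -> String {
        s.bytes()
            .map(|b| match b {
                b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                    (b as char).to_string()
                }
                _ => format!("%{:02X}", b),
            })
            .collect()
    }
    format!(
        "grant_type=refresh_token&client_id={}&refresh_token={}",
        enc(client_id),
        enc(refresh_token)
    )
}

fn main() {
    // Placeholder values, not real credentials.
    println!("{}", refresh_body("app_123", "tok.abc-def"));
}
```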

Medium

  1. Silent fallback on client builder failure (openai_codex_session.rs:1640)

    .build().unwrap_or_else(|_| Client::new())

    If the builder fails (TLS error), the fallback client won't have configured headers/timeout, causing confusing auth failures. Propagate the error instead.

  2. set_model() duplicated in both OpenAiCodexProvider and TokenRefreshingProvider — the decorator should delegate to self.inner.set_model().

  3. list_models() returns empty vec — means setup wizard model selection won't show models. Add a comment explaining this is expected for subscription-based access.

Low

  1. .gitignore change (.worktrees) is unrelated — should be a separate commit.

  2. include: ["reasoning.encrypted_content"] in request body is reasoning-model-specific — could cause issues with non-reasoning models. Consider making it conditional.

Positives

  • Decorator pattern (TokenRefreshingProvider) is clean and consistent with RetryProvider/CircuitBreakerProvider
  • Device code flow UX is clear with verification URL and code display
  • renewal_lock mutex properly prevents thundering herd on token refresh
  • Session persistence follows the NearAI pattern — consistent UX
  • Comprehensive SSE parser with good edge case coverage (multiple tool calls, error events, [DONE] marker)
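
The SSE framing praised above boils down to a small line protocol: `data:` lines carry payloads and a literal `[DONE]` payload terminates the stream. A stdlib-only sketch of that core (the `[DONE]` terminator follows the common OpenAI-style convention; this is not the PR's actual parser, which also handles events, errors, and multi-part tool calls):

```rust
/// Extract the `data:` payloads from an SSE stream, stopping at the
/// conventional `[DONE]` terminator. Comment lines (leading `:`) and
/// other fields (`event:`, `id:`) are skipped for brevity.
fn sse_payloads(stream: &str) -> Vec<String> {
    let mut out = Vec::new();
    for line in stream.lines() {
        if let Some(data) = line.strip_prefix("data:") {
            let data = data.trim_start();
            if data == "[DONE]" {
                break; // end-of-stream marker; ignore anything after it
            }
            out.push(data.to_string());
        }
    }
    out
}

fn main() {
    let stream = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n";
    for p in sse_payloads(stream) {
        println!("{p}");
    }
}
```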

@henrypark133 changed the base branch from main to staging on March 10, 2026 at 02:19
Sanjeev-S added a commit to Sanjeev-S/ironclaw that referenced this pull request Mar 10, 2026
Review fixes for the OpenAI Codex provider PR:

- Remove dead `generate_pkce()` code (device flow gets PKCE from server)
- Fix `refresh_tokens()` to use `.form()` instead of `.json()` per OAuth spec
- Restore sync `build_provider_chain()` for backward compat; add async variant
  `build_provider_chain_async()` for codex (which needs async OAuth)
- Remove Clone from `OpenAiCodexSession`, restrict fields to `pub(crate)`
- Propagate HTTP client builder error instead of silent fallback
- Redact device code response body from debug log
- Change `set_model()` in TokenRefreshingProvider to delegate to inner
- Replace hardcoded `/tmp/` test path with `tempfile::tempdir()`
- Extract `assemble_provider_chain()` sync helper from `build_provider_chain()`
- Accept `request_timeout_secs` from config instead of hardcoded 300s
- Parse `Retry-After` header on 429 responses (matches nearai_chat.rs pattern)
- Reuse `normalize_schema_strict()` for Codex tool definitions
- Add warning log for dropped image attachments
- Add doc comments on `list_models()` and `include` field
- Add `OPENAI_CODEX_API_URL` to `.env.example`
- Revert unrelated `.worktrees` addition to `.gitignore`
- Update `src/llm/CLAUDE.md` with Codex provider docs and sync/async split
- Update root `CLAUDE.md`: add Codex to features, providers, and config sections

[skip-regression-check]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions bot added the scope: docs (Documentation) label on Mar 10, 2026
@Sanjeev-S marked this pull request as draft on March 10, 2026 at 14:47
@Sanjeev-S
Contributor Author

Converting to draft while I rework this and rebase

Collaborator

@zmanian left a comment


Re-review: Previous feedback well addressed, one new issue

The fix commit (af603be) systematically addressed all 16 items from both reviews. Good work. Specifically:

Blocking items -- all resolved:

  • src/llm/CLAUDE.md updated with Codex provider docs and sync/async split
  • /tmp/ test path replaced with tempfile::tempdir()
  • build_provider_chain() sync signature preserved; build_provider_chain_async() added; shared logic extracted to assemble_provider_chain()

Non-blocking items -- all resolved:

  • generate_pkce() dead code removed
  • Device code response body redacted from debug log (now logs byte count)
  • refresh_tokens() switched from .json() to .form() per OAuth spec
  • Clone removed from OpenAiCodexSession, fields restricted to pub(crate)
  • Client builder error propagated instead of silent fallback
  • set_model() in TokenRefreshingProvider delegates to inner
  • /tmp/ test path in token_refreshing.rs also fixed
  • request_timeout_secs from config passed through to provider
  • Retry-After header parsing implemented (both delay-seconds and HTTP-date)
  • normalize_schema_strict() reused for Codex tool definitions
  • Warning log for dropped image attachments added
  • Doc comments on list_models() and include field added
  • OPENAI_CODEX_API_URL added to .env.example
  • Unrelated .worktrees .gitignore change reverted

One new blocking issue

Env var name mismatch between docs and config parser:

.env.example and src/llm/CLAUDE.md document:

  • OPENAI_CODEX_AUTH_URL
  • OPENAI_CODEX_API_URL

But src/config/llm.rs reads:

  • OPENAI_CODEX_AUTH_ENDPOINT
  • OPENAI_CODEX_API_BASE_URL

Users who follow the documented env var names will have their overrides silently ignored. Either rename the optional_env() calls in src/config/llm.rs to match the documented names, or update the docs to match the code. I'd suggest matching the docs since _URL is shorter and consistent with what's already published in .env.example.

Minor nit (non-blocking)

The body_text variable in device_code_login() parse error still includes the raw response body in the error message (line ~2040 of the session file). The debug log was correctly redacted, but on a parse failure the body leaks into the error string. Consider redacting there too, or at minimum truncating it.

Overall this is a solid implementation. Fix the env var name mismatch and it's good to go.

@Sanjeev-S force-pushed the feat/openai-codex-provider branch from af603be to 6e0f803 on March 11, 2026 at 13:01
Sanjeev-S added a commit to Sanjeev-S/ironclaw that referenced this pull request Mar 11, 2026
@Sanjeev-S force-pushed the feat/openai-codex-provider branch from 6e0f803 to 215825e on March 11, 2026 at 13:36
Sanjeev-S added a commit to Sanjeev-S/ironclaw that referenced this pull request Mar 11, 2026
@github-actions bot added labels on Mar 11, 2026: scope: agent (Agent core: agent loop, router, scheduler), scope: channel (Channel infrastructure), scope: channel/web (Web gateway channel), scope: channel/wasm (WASM channel runtime), scope: tool (Tool infrastructure)
@github-actions bot added labels on Mar 11, 2026: scope: db/libsql (libSQL / Turso backend), scope: safety (Prompt injection defense), scope: workspace (Persistent memory / workspace), scope: orchestrator (Container orchestrator), scope: worker (Container worker), scope: secrets (Secrets management), scope: extensions (Extension management), scope: sandbox (Docker sandbox), scope: ci (CI/CD workflows), scope: dependencies (Dependency updates)
Sanjeev-S and others added 4 commits March 11, 2026 13:51
Add OpenAiCodex as a new LLM backend variant with config for auth
endpoint, API base URL, client ID, and session persistence path.

The session manager implements OpenAI's device code auth flow
(headless-friendly, no browser required on the server) with automatic
token refresh, following the same persistence pattern as the existing
NEAR AI session manager.

Closes nearai#742

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Native Responses API client for chatgpt.com/backend-api/codex/responses,
the endpoint that works with ChatGPT subscription tokens. Handles SSE
streaming, text completions, and tool call round-trips.

Token-refreshing decorator wraps the provider to pre-emptively refresh
OAuth tokens before API calls and retry once on auth failures. Reports
zero cost since billing is through subscription.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
feat(llm): wire OpenAI Codex into provider factory, CLI, and setup wizard

Connect the new provider to the LLM factory, add openai_codex to the
CLI --backend flag, and add it as an option in the onboarding wizard.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Review fixes for the OpenAI Codex provider PR:

- Remove dead `generate_pkce()` code (device flow gets PKCE from server)
- Fix `refresh_tokens()` to use `.form()` instead of `.json()` per OAuth spec
- Inline codex dispatch into `build_provider_chain()` (single async function,
  no separate `assemble_provider_chain()` helper — matches main's pattern)
- Remove Clone from `OpenAiCodexSession`, restrict fields to `pub(crate)`
- Propagate HTTP client builder error instead of silent fallback
- Redact device code response body from debug log
- Change `set_model()` in TokenRefreshingProvider to delegate to inner
- Replace hardcoded `/tmp/` test path with `tempfile::tempdir()`
- Accept `request_timeout_secs` from config instead of hardcoded 300s
- Parse `Retry-After` header on 429 responses (matches nearai_chat.rs pattern)
- Reuse `normalize_schema_strict()` for Codex tool definitions
- Add warning log for dropped image attachments
- Add doc comments on `list_models()` and `include` field
- Add `OPENAI_CODEX_API_URL` to `.env.example`
- Fix codex error message in `create_llm_provider()` for clarity
- Revert unrelated `.worktrees` addition to `.gitignore`
- Update `src/llm/CLAUDE.md` with Codex provider docs

[skip-regression-check]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Sanjeev-S force-pushed the feat/openai-codex-provider branch from 215825e to bd588e0 on March 11, 2026 at 14:02
@Sanjeev-S
Contributor Author

Thank you for the detailed feedback, @zmanian and @ilblackdragon. I've addressed the comments.

@zmanian One change since your last review: I removed the build_provider_chain_async variant I had added, because build_provider_chain itself became async in a prior diff I picked up while rebasing.

@Sanjeev-S marked this pull request as ready for review on March 11, 2026 at 14:19
@justinfiore

@Sanjeev-S Thanks for putting this PR together.
FWIW, I tried out your branch and it worked perfectly for me.
Obviously, that isn't "full QA", but it is a data point.
Looking forward to when this gets merged in.

@KatarinaYuan

Looking forward to its release!

@justinfiore

@Sanjeev-S and @zmanian, do you know what is preventing this from being merged?
Is it just the merge conflicts, or is something else holding it up?
Would love to see this get merged.

ilblackdragon added a commit that referenced this pull request Mar 20, 2026
fix: address review feedback and harden OpenAI Codex provider (takeover #744)

Security:
- Add SSRF validation (validate_base_url) on OPENAI_CODEX_AUTH_URL and
  OPENAI_CODEX_API_URL, matching the pattern used by all other base URL
  configs (regression test for #1103 included)

Correctness:
- Add missing cache_write_multiplier() and cache_read_discount() trait
  delegation in TokenRefreshingProvider
- Cap device-code polling backoff at 60s to prevent unbounded interval
  growth on repeated 429 responses
- Default expires_in to 3600s when server returns 0, preventing
  immediately-expired sessions
- Fix pre-existing SseEvent::JobResult missing fallback_deliverable field
  in job_monitor.rs tests

Cleanup:
- Extract duplicated make_test_jwt() and test_codex_config() into shared
  codex_test_helpers module

Co-Authored-By: Sanjeev-S <Sanjeev-S@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ilblackdragon
Member

Thanks for the work on this, @Sanjeev-S! I've picked up your changes and continued them in #1461.

The new PR includes your original work plus:

  • Merge with latest staging (resolved 4 conflicts)
  • SSRF validation on OPENAI_CODEX_AUTH_URL and OPENAI_CODEX_API_URL (the one security gap found during review)
  • Missing trait method delegation, backoff cap, expires_in edge case fix
  • Shared test helpers deduplication

You're credited as co-author on all commits. Feel free to review the new PR!

@ilblackdragon
Member

Superseded by #1461 (takeover with merge + fixes). Thank you @Sanjeev-S for the original implementation!

ilblackdragon added a commit that referenced this pull request Mar 20, 2026
…1461)

* feat(llm): add OpenAI Codex backend config and OAuth session manager

Add OpenAiCodex as a new LLM backend variant with config for auth
endpoint, API base URL, client ID, and session persistence path.

The session manager implements OpenAI's device code auth flow
(headless-friendly, no browser required on the server) with automatic
token refresh, following the same persistence pattern as the existing
NEAR AI session manager.

Closes #742

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(llm): add Responses API client and token-refreshing decorator

Native Responses API client for chatgpt.com/backend-api/codex/responses,
the endpoint that works with ChatGPT subscription tokens. Handles SSE
streaming, text completions, and tool call round-trips.

Token-refreshing decorator wraps the provider to pre-emptively refresh
OAuth tokens before API calls and retry once on auth failures. Reports
zero cost since billing is through subscription.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(llm): wire OpenAI Codex into provider factory, CLI, and setup wizard

Connect the new provider to the LLM factory, add openai_codex to the
CLI --backend flag, and add it as an option in the onboarding wizard.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(llm): address PR #744 review feedback (20 items)

Review fixes for the OpenAI Codex provider PR:

- Remove dead `generate_pkce()` code (device flow gets PKCE from server)
- Fix `refresh_tokens()` to use `.form()` instead of `.json()` per OAuth spec
- Inline codex dispatch into `build_provider_chain()` (single async function,
  no separate `assemble_provider_chain()` helper — matches main's pattern)
- Remove Clone from `OpenAiCodexSession`, restrict fields to `pub(crate)`
- Propagate HTTP client builder error instead of silent fallback
- Redact device code response body from debug log
- Change `set_model()` in TokenRefreshingProvider to delegate to inner
- Replace hardcoded `/tmp/` test path with `tempfile::tempdir()`
- Accept `request_timeout_secs` from config instead of hardcoded 300s
- Parse `Retry-After` header on 429 responses (matches nearai_chat.rs pattern)
- Reuse `normalize_schema_strict()` for Codex tool definitions
- Add warning log for dropped image attachments
- Add doc comments on `list_models()` and `include` field
- Add `OPENAI_CODEX_API_URL` to `.env.example`
- Fix codex error message in `create_llm_provider()` for clarity
- Revert unrelated `.worktrees` addition to `.gitignore`
- Update `src/llm/CLAUDE.md` with Codex provider docs

[skip-regression-check]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: address review feedback and harden OpenAI Codex provider (takeover #744)

Security:
- Add SSRF validation (validate_base_url) on OPENAI_CODEX_AUTH_URL and
  OPENAI_CODEX_API_URL, matching the pattern used by all other base URL
  configs (regression test for #1103 included)

Correctness:
- Add missing cache_write_multiplier() and cache_read_discount() trait
  delegation in TokenRefreshingProvider
- Cap device-code polling backoff at 60s to prevent unbounded interval
  growth on repeated 429 responses
- Default expires_in to 3600s when server returns 0, preventing
  immediately-expired sessions
- Fix pre-existing SseEvent::JobResult missing fallback_deliverable field
  in job_monitor.rs tests

Cleanup:
- Extract duplicated make_test_jwt() and test_codex_config() into shared
  codex_test_helpers module

Co-Authored-By: Sanjeev-S <Sanjeev-S@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address PR review feedback on OpenAI Codex provider (#1461)

- Login command now resolves OPENAI_CODEX_* env overrides even when
  LLM_BACKEND isn't set to openai_codex (Copilot review)
- Setup wizard "Keep current provider?" for codex no longer re-triggers
  device code login — mirrors Bedrock's keep-and-return pattern (Copilot)
- Revert provider init log from info back to debug (Copilot)
- Add warning log when token expires_in=0, before defaulting to 3600s
  (Gemini review)

Co-Authored-By: Sanjeev-S <Sanjeev-S@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Sanjeev Suresh <Sanjeev-S@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
zmanian pushed a commit that referenced this pull request Mar 21, 2026
…1461)

bkutasi pushed a commit to bkutasi/ironclaw that referenced this pull request Mar 28, 2026
…earai#1461)

drchirag1991 pushed a commit to drchirag1991/ironclaw that referenced this pull request Apr 8, 2026
…earai#1461)
