Preseed auth and trust settings for codex CLI. #9376
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. This stack of pull requests is managed by Graphite.
I'm starting a first review of this pull request. You can view the conversation on Warp. I completed the review and posted feedback on this pull request.
Overview
This PR seeds Codex auth and trust/config files for third-party harness runs, including API-key auth, trusted project entries, child repo trust, and a hardcoded regional OpenAI base URL.
Concerns
- Seeded `auth.json` writes the OpenAI API key through the generic JSON writer, so newly-created secret files may be world-readable depending on the process umask.
- Auth key resolution does not match the terminal environment precedence, which can seed Codex with a different key than the one the launched process is meant to use.
- `openai_base_url` is overwritten unconditionally, which can break existing Codex configs or non-staging credentials.
Verdict
Found: 1 critical, 2 important, 0 suggestions
Request changes
Comment /oz-review on this pull request to retrigger a review (up to 3 times on the same pull request).
```rust
if auth.auth_mode.is_none() {
    auth.auth_mode = Some(CODEX_AUTH_MODE_API_KEY.to_owned());
}
write_json_file(auth_path, &auth, "Failed to serialize Codex auth.json")
```
🚨 [CRITICAL] auth.json stores the API key, but write_json_file uses std::fs::write so newly-created files follow the process umask, commonly 0644; write this secret file with restrictive permissions like Codex's own 0600 storage path.
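A minimal sketch of the suggested fix, assuming Unix targets: open the file with an explicit `0600` mode instead of `std::fs::write`, so the created secret file is owner-only regardless of umask. The helper name `write_secret_json` is hypothetical, not the PR's actual API.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

// Hypothetical helper: write a secrets file with 0600 permissions.
// Unlike std::fs::write, the mode is applied at creation time, so the
// process umask (commonly 022 -> 0644) cannot widen it past 0600.
// Note: `.mode()` only takes effect when the file is created; a
// pre-existing file keeps its old permissions.
fn write_secret_json(path: &Path, contents: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .mode(0o600) // owner read/write only, like Codex's own auth storage
        .open(path)?;
    file.write_all(contents.as_bytes())
}
```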
```rust
/// `OPENAI_API_KEY`, falling back to the `OPENAI_API_KEY` env var. Returns `None` if
/// neither is set or both are empty.
fn resolve_openai_api_key(secrets: &HashMap<String, ManagedSecretValue>) -> Option<String> {
    if let Some(ManagedSecretValue::RawValue { value }) = secrets.get(OPENAI_API_KEY_ENV) {
```
⚠️ [IMPORTANT] Key resolution prefers the managed secret for `OPENAI_API_KEY`, but the terminal env setup intentionally lets existing env vars take precedence, so `auth.json` can be seeded with a different credential than the launched Codex process should use.
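A sketch of the precedence the reviewer describes: an existing env var wins over the managed secret. To keep the example testable, the already-captured env value is passed in as a parameter; that parameter and the `ManagedSecretValue` stand-in are illustrative simplifications, not the PR's actual signatures.

```rust
use std::collections::HashMap;

// Illustrative stand-in for the PR's ManagedSecretValue type.
enum ManagedSecretValue {
    RawValue { value: String },
}

const OPENAI_API_KEY_ENV: &str = "OPENAI_API_KEY";

// Sketch: match terminal-env precedence, so a non-empty env var
// (captured by the caller) wins; fall back to the managed secret.
fn resolve_openai_api_key(
    env_value: Option<&str>,
    secrets: &HashMap<String, ManagedSecretValue>,
) -> Option<String> {
    if let Some(v) = env_value {
        if !v.is_empty() {
            return Some(v.to_owned());
        }
    }
    match secrets.get(OPENAI_API_KEY_ENV) {
        Some(ManagedSecretValue::RawValue { value }) if !value.is_empty() => Some(value.clone()),
        _ => None,
    }
}
```

With this ordering, `auth.json` is seeded with the same key the launched process would resolve.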
```rust
    )
})?;

set_codex_openai_base_url(&mut doc, CODEX_OPENAI_BASE_URL);
```
⚠️ [IMPORTANT] This unconditionally overwrites `openai_base_url` on every Codex run, even for user env auth or existing logins, which can break custom/proxy/global configs; only set the US endpoint for the managed staging key, or preserve a user-provided value.
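The preserve-if-set behavior can be sketched as follows. A plain struct stands in for the real TOML document, and the URL string is only a placeholder; both are illustrative assumptions, not the PR's types.

```rust
// Illustrative stand-in for the parsed Codex config document.
struct CodexConfig {
    openai_base_url: Option<String>,
}

// Sketch: seed the managed base URL only when the user hasn't already
// configured one, so custom/proxy/global endpoints are preserved.
fn seed_openai_base_url(config: &mut CodexConfig, managed_url: &str) {
    if config.openai_base_url.is_none() {
        config.openai_base_url = Some(managed_url.to_owned());
    }
}
```

A further refinement, per the review, would be to apply the seed only when the managed staging key is the credential in use.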
Description
This PR seeds the trust and auth config files for the codex harness, so that we don't get interactive dialogs re: trusting project folders or setting up auth when running in an autonomous cloud agent context. This is handled similarly to the Claude Code and Gemini settings configs.
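The child-repo trust pass described above can be sketched roughly like this: scan the workspace for immediate subdirectories containing a `.git` entry, and trust each one. The function name and the flat one-level scan are assumptions for illustration, not the PR's actual implementation.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical helper: find immediate child directories of the workspace
// that are git repos (contain a `.git` entry), so each can be added as a
// trusted project entry in Codex's config.toml.
fn child_git_repos(workspace: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut repos = Vec::new();
    for entry in fs::read_dir(workspace)? {
        let path = entry?.path();
        if path.is_dir() && path.join(".git").exists() {
            repos.push(path);
        }
    }
    Ok(repos)
}
```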
Of note:
- Trust entries are written to `config.toml` for both the working dir and any child git repos of that dir. This is relevant in the cloud agent case since we create a `workspace/` dir and clone all environment repos into that dir, but we want to make sure they're trusted as well.
- Auth is seeded via a `ManagedSecret` type that can take in both an OpenAI API key and, optionally, a base URL to use it with.

Testing
Added unit tests that cover setting up the settings files and making sure that we don't clobber any existing settings.
Tested manually after removing all of my local codex config to make sure that we don't get popups and can run queries correctly:
Demo.of.codex.initial.setup.mov
(Loom is having an incident but the show must go on, hence the QuickTime video)