fix(setup): preserve model selection on provider re-run #679
Protocol-zero-0 wants to merge 4 commits into nearai:staging
Conversation
Keep the previously selected model when onboarding is re-run with the same LLM backend so users are not forced to re-enter it. Add regression tests for same-backend retention and backend-switch reset behavior, and document the setup rule in the onboarding spec. Made-with: Cursor
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the user experience of the setup wizard by intelligently managing the selected LLM model. Previously, re-running the setup process would unconditionally clear the chosen model, forcing users to re-enter it even if they were using the same backend. The changes introduce a new helper function that ensures the model selection is preserved when the backend is not changed, only clearing it when a different backend is explicitly chosen. This streamlines the setup process and reduces redundant user input.
Code Review
This pull request fixes a bug where the selected model was unconditionally cleared when re-running the setup wizard, even if the LLM provider remained the same. The fix introduces a new helper function, set_llm_backend_preserving_model, which conditionally clears the selected_model only when the backend provider actually changes. This new function is consistently applied across all provider setup helpers, which not only fixes the bug but also makes the behavior uniform. The pull request includes regression tests covering the new logic and updates the README.md to document the corrected behavior. The changes are correct and improve the user experience of the setup wizard.
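For readers without the diff open, here is a minimal sketch of what such a helper could look like. The `SetupWizard`/`Settings` shape is an assumption inferred from the identifiers quoted in this thread, not the actual patch:

```rust
// Hypothetical settings layout, inferred from the fields named in this PR.
struct Settings {
    llm_backend: Option<String>,
    selected_model: Option<String>,
}

struct SetupWizard {
    settings: Settings,
}

impl SetupWizard {
    fn new() -> Self {
        SetupWizard {
            settings: Settings { llm_backend: None, selected_model: None },
        }
    }

    /// Set the backend, clearing `selected_model` only when the backend
    /// actually changes. An unset backend (first-time setup) counts as a
    /// change, so the model is cleared in that case too.
    fn set_llm_backend_preserving_model(&mut self, backend: &str) {
        if self.settings.llm_backend.as_deref() != Some(backend) {
            self.settings.selected_model = None;
        }
        self.settings.llm_backend = Some(backend.to_string());
    }
}
```

With this shape, a same-backend re-run leaves `selected_model` untouched, which is exactly the behavior the regression tests below exercise.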
Pull request overview
This PR fixes a setup-wizard regression where re-running onboarding with the same LLM backend unnecessarily cleared the previously selected model, forcing users to re-enter it.
Changes:
- Introduce a shared helper to set `llm_backend` while only clearing `selected_model` when the backend actually changes.
- Update provider setup paths to use the new helper consistently.
- Add regression tests for the “preserve vs clear model” behavior and update the onboarding spec.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `src/setup/wizard.rs` | Adds `set_llm_backend_preserving_model()` and routes provider setup flows through it; adds regression tests. |
| `src/setup/README.md` | Updates onboarding documentation to reflect the new "preserve model unless backend changes" behavior. |
```diff
@@ -183,13 +183,17 @@ Switching between them overwrites the same credential slot.
 - Pre-configured OpenAI-compatible preset with base URL `https://openrouter.ai/api/v1`
 - Delegates to `setup_api_key_provider()` with a display name override ("OpenRouter")
 - Sets `llm_backend = "openai_compatible"` and `openai_compatible_base_url` automatically
-- Clears `selected_model` so Step 4 prompts for a model name (manual text input, no API-based model fetching)
+- Clears `selected_model` only when switching from a different backend, so
+  Step 4 prompts for a compatible model when needed
```
This section references setup_openrouter, but there doesn’t appear to be any such function in the codebase (provider setup is dispatched via run_provider_setup() and registry SetupHints). Consider updating the spec to describe the actual implementation entrypoint(s) so future changes don’t chase a nonexistent helper.
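A rough sketch of the dispatch shape this comment describes. The signature and match structure are assumptions; only `run_provider_setup()`, `setup_api_key_provider()`, and the OpenRouter preset details come from this thread:

```rust
struct Settings {
    openai_compatible_base_url: Option<String>,
}

struct SetupWizard {
    settings: Settings,
}

impl SetupWizard {
    fn setup_api_key_provider(&mut self, display_name: &str) {
        // Prompt for and store the provider API key (elided in this sketch).
        println!("Configuring {display_name}…");
    }

    // Hypothetical dispatch: provider setup is routed from registry data,
    // so there is no standalone setup_openrouter() helper to document.
    fn run_provider_setup(&mut self, provider_id: &str) {
        match provider_id {
            "openrouter" => {
                // OpenRouter is a pre-configured OpenAI-compatible preset.
                self.settings.openai_compatible_base_url =
                    Some("https://openrouter.ai/api/v1".to_string());
                self.setup_api_key_provider("OpenRouter"); // display name override
            }
            _ => { /* other providers resolved via registry SetupHints (elided) */ }
        }
    }
}
```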
Clarify that OpenRouter setup is dispatched through run_provider_setup and setup_api_key_provider rather than a standalone helper. This keeps the setup spec aligned with the actual implementation referenced in PR review feedback. Made-with: Cursor
Addressed the review feedback in a follow-up commit: the spec now describes the actual dispatch entrypoints (`run_provider_setup()` and `setup_api_key_provider()`) rather than a standalone `setup_openrouter` helper.
zmanian
left a comment
There was a problem hiding this comment.
The helper function is clean and correct. Tests and spec update included.
Blocking: Two call sites missed -- setup_anthropic_oauth and setup_ollama_generic still use the inline pattern instead of the new set_llm_backend_preserving_model() helper. Per CLAUDE.md's "fix the pattern, not just the instance" rule, all call sites should use the shared helper. This is a ~4-line change.
Non-blocking:

- Consider adding a test for the `None` initial state (first-time setup where `llm_backend` is `None`)
- Minor spec phrasing ambiguity: "Clears `selected_model` only when switching from a different backend" could be clearer
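A sketch of the ~4-line change being requested at the two missed call sites, reusing the `SetupWizard` sketch earlier in the thread. The inline pattern and the `"ollama"` backend id are assumptions inferred from this review, not the actual code:

```rust
impl SetupWizard {
    fn setup_ollama_generic(&mut self) {
        // Before (assumed inline pattern): the unconditional clear loses the
        // model even when setup is re-run with the same backend.
        //     self.settings.llm_backend = Some("ollama".to_string());
        //     self.settings.selected_model = None;

        // After: route through the shared helper so same-backend re-runs
        // preserve the previously selected model.
        self.set_llm_backend_preserving_model("ollama");

        // ... Ollama-specific setup steps elided ...
    }
}
```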
Document the same-backend retention rule more explicitly and add a regression test for the first-time backend selection case. This tightens the onboarding spec around the helper semantics without changing the feature scope. Made-with: Cursor
Follow-up pushed in a new commit. What I updated: the onboarding spec now documents the same-backend retention rule explicitly, and a regression test covers the first-time backend selection (`None` initial state) case. I also double-checked the two call sites mentioned in the review, and re-ran the focused local validation.
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
```rust
#[test]
fn test_set_llm_backend_preserves_model_when_backend_was_unset() {
    let mut wizard = SetupWizard::new();
    wizard.settings.selected_model = Some("gpt-4o".to_string());

    wizard.set_llm_backend_preserving_model("openai");

    assert_eq!(wizard.settings.llm_backend.as_deref(), Some("openai"));
    assert_eq!(wizard.settings.selected_model, None);
}
```
The test name test_set_llm_backend_preserves_model_when_backend_was_unset doesn’t match the asserted behavior: when llm_backend is unset, set_llm_backend_preserving_model() treats this as a backend change and clears selected_model (the test asserts selected_model == None). Rename the test (or adjust the expectation) so the name reflects what’s actually being verified.
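Concretely, the rename might look like this; the new name matches the one later confirmed in the approval comment, and the wizard internals are as in the sketch earlier in the thread:

```rust
#[test]
fn test_set_llm_backend_clears_model_when_backend_was_unset() {
    let mut wizard = SetupWizard::new();
    // First-time setup: no backend chosen yet, but a model value is present.
    wizard.settings.selected_model = Some("gpt-4o".to_string());

    wizard.set_llm_backend_preserving_model("openai");

    // Going from no backend to "openai" is a backend change, so the model
    // is cleared and the new backend is recorded.
    assert_eq!(wizard.settings.llm_backend.as_deref(), Some("openai"));
    assert_eq!(wizard.settings.selected_model, None);
}
```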
```diff
 **OpenRouter** (via `run_provider_setup()` and `setup_api_key_provider()`):
 - Pre-configured OpenAI-compatible preset with base URL `https://openrouter.ai/api/v1`
 - Delegates to `setup_api_key_provider()` with a display name override ("OpenRouter")
 - Sets `llm_backend = "openai_compatible"` and `openai_compatible_base_url` automatically
-- Clears `selected_model` so Step 4 prompts for a model name (manual text input, no API-based model fetching)
+- Preserves `selected_model` on a same-backend re-run
+- Clears `selected_model` only when switching from a different backend, so
+  Step 4 prompts for a compatible model when needed
```
This OpenRouter onboarding spec is out of sync with the current provider registry: providers.json defines OpenRouter as its own backend id ("openrouter") with its own env var/secret (OPENROUTER_API_KEY / llm_openrouter_api_key), not as llm_backend = "openai_compatible". Please update these bullets (and any related text) to reflect that OpenRouter is stored as openrouter and only shares the openai_compatible_base_url field as a backward-compat fallback in config resolution.
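For context, a hypothetical `providers.json` entry consistent with this comment. The field names are guesses at the schema; only the backend id, env var, secret name, and base URL are taken from the comment itself:

```json
{
  "id": "openrouter",
  "display_name": "OpenRouter",
  "base_url": "https://openrouter.ai/api/v1",
  "api_key_env": "OPENROUTER_API_KEY",
  "secret": "llm_openrouter_api_key"
}
```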
Align the setup spec with the current OpenRouter backend identity and rename the unset-backend regression test to match the behavior it verifies.
Pushed a follow-up to address the remaining review drift:

- corrected the OpenRouter setup docs to match the current backend/secret/env names (`openrouter`, `OPENROUTER_API_KEY`, `llm_openrouter_api_key`)
- clarified that `openai_compatible_base_url` is only kept as a compatibility fallback
- renamed the unset-backend regression test so the name matches the asserted behavior

I also kicked off the focused test target locally again (`cargo test --lib test_set_llm_backend`). This should address the remaining doc/test mismatches called out after the earlier helper rollout.
Focused local validation completed after the latest follow-up: `cargo test --lib test_set_llm_backend`, `cargo fmt --check`, and `cargo clippy --all --benches --tests --examples --all-features -- -D warnings` all pass. So the doc/test alignment follow-up is now both pushed and locally revalidated.
Thanks again for the review. I believe the blocking items from the latest review are now addressed on this branch: all call sites route through the shared helper, the first-time-setup regression test is in place, and the spec wording has been clarified. If you get a chance, I'd appreciate a re-review.
zmanian
left a comment
All blocking feedback addressed:

- Missed call sites fixed -- all 5 call sites now use `set_llm_backend_preserving_model()`, including `setup_oauth_provider` and the generic provider fallback path.
- First-time setup test added -- `test_set_llm_backend_clears_model_when_backend_was_unset` covers the `None` initial state.
- Spec phrasing clarified -- README now explicitly documents the preserve/clear behavior.
Clean, focused change. Approved.
Note: This is a fork PR -- CI only ran classify/scope. Verified the logic by reading the diff; the change is small (helper function + consistent call-site updates) and well-tested.
Replacement PR opened as #987 with the approved model-retention fix rebased onto current staging.
Superseded by #987, which has merged into staging. |
Summary
- Preserve `selected_model` when the setup wizard is re-run with the same LLM backend
- Document the rule in `src/setup/README.md`

Problem
Re-running onboarding remembered most prior settings, but the model name was lost and had to be entered again.
Root cause
Provider setup helpers reset `selected_model` unconditionally, even when the backend did not change.

Fix
Introduce a shared helper that updates the backend while only clearing `selected_model` on real backend changes.

Tests

- `cargo test --lib test_set_llm_backend`
- `cargo fmt --check`
- `cargo clippy --all --benches --tests --examples --all-features -- -D warnings`

Scope / Non-goals
Made with Cursor