
feat: add support for gemini-3-pro-preview model#2202

Merged
naorpeled merged 3 commits into The-PR-Agent:main from claudiunicolaa:fix/max-tokens-gemini-3-pro-preview on Feb 7, 2026


Conversation

Contributor

@claudiunicolaa claudiunicolaa commented Feb 4, 2026

User description

  • Add gemini/gemini-3-pro-preview with 1,048,576 max tokens
  • Add vertex_ai/gemini-3-pro-preview with 1,048,576 max tokens
  • Add test coverage for both model variants
  • Update documentation with usage examples for both variants

This enables users to utilize Google's Gemini 3 Pro Preview model through both Google AI Studio and Vertex AI providers with full 1M+ token context window support.

Resources:


PR Type

Enhancement


Description

  • Add gemini-3-pro-preview model support for both Google AI and Vertex AI

  • Configure 1,048,576 max tokens for both model variants

  • Add parameterized test coverage for both provider implementations

  • Update documentation with usage examples for Google AI variant


Diagram Walkthrough

```mermaid
flowchart LR
  A["gemini-3-pro-preview<br/>Model Support"] --> B["Model Registry<br/>Configuration"]
  A --> C["Test Coverage<br/>Parameterized Tests"]
  A --> D["Documentation<br/>Usage Examples"]
  B --> E["Google AI Studio<br/>1M+ tokens"]
  B --> F["Vertex AI<br/>1M+ tokens"]
```

File Walkthrough

Relevant files
Configuration changes
__init__.py
Register gemini-3-pro-preview models with max tokens

pr_agent/algo/__init__.py

  • Add vertex_ai/gemini-3-pro-preview with 1,048,576 max tokens
  • Add gemini/gemini-3-pro-preview with 1,048,576 max tokens
  • Both entries placed in appropriate alphabetical positions in model
    registry
+2/-0     
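As a sketch, the registry change amounts to two dictionary entries mapping model names to context sizes. `MAX_TOKENS` and `get_max_tokens_sketch` below are simplified stand-ins for the real code in `pr_agent/algo/__init__.py`, which also honors settings overrides:

```python
# Simplified stand-in for the model registry in pr_agent/algo/__init__.py.
MAX_TOKENS = {
    # ... existing entries ...
    "gemini/gemini-3-pro-preview": 1048576,     # Google AI Studio variant
    "vertex_ai/gemini-3-pro-preview": 1048576,  # Vertex AI variant
    # ... existing entries ...
}

def get_max_tokens_sketch(model: str) -> int:
    """Simplified lookup; the real get_max_tokens also consults the
    custom_model_max_tokens / max_model_tokens settings overrides."""
    if model not in MAX_TOKENS:
        raise ValueError(f"Unknown model: {model}")
    return MAX_TOKENS[model]
```

Both variants resolve to the same 1,048,576-token limit; only the provider prefix differs.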
Tests
test_get_max_tokens.py
Add parameterized test for gemini-3-pro-preview variants 

tests/unittest/test_get_max_tokens.py

  • Add parameterized test for both gemini-3-pro-preview variants
  • Test validates 1,048,576 max tokens for Google AI and Vertex AI
    providers
  • Uses pytest parametrization to consolidate test cases
+14/-0   
Documentation
qodo_merge_models.md
Document gemini-3-pro-preview Google AI variant usage       

docs/docs/usage-guide/qodo_merge_models.md

  • Add gemini/gemini-3-pro-preview to supported models list
  • Add configuration example for Google AI Studio variant
  • Maintains consistency with existing Vertex AI variant documentation
+8/-0     
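For the Vertex AI variant, the corresponding configuration would mirror the Google AI Studio example, following the same `[config]` pattern used in the docs (note that Vertex AI typically also requires project and location credentials configured separately; those are not shown here):

```toml
[config]
model="vertex_ai/gemini-3-pro-preview"
```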

- Add gemini/gemini-3-pro-preview with 1,048,576 max tokens
- Add vertex_ai/gemini-3-pro-preview with 1,048,576 max tokens
- Add test coverage for both model variants
- Update documentation with usage examples for both variants

This enables users to utilize Google's Gemini 3 Pro Preview model
through both Google AI Studio and Vertex AI providers with full
1M+ token context window support.
@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects Bot commented Feb 4, 2026

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢
No security concerns identified
No security vulnerabilities detected by AI analysis. Human verification advised for critical code.
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Consistent Naming Conventions

Objective: All new variables, functions, and classes must follow the project's established naming
standards

Status: Passed

No Dead or Commented-Out Code

Objective: Keep the codebase clean by ensuring all submitted code is active and necessary

Status: Passed

Robust Error Handling

Objective: Ensure potential errors and edge cases are anticipated and handled gracefully throughout
the code

Status: Passed

Single Responsibility for Functions

Objective: Each function should have a single, well-defined responsibility

Status: Passed

When relevant, utilize early return

Objective: In a code snippet containing multiple logic conditions (such as 'if-else'), prefer an
early return on edge cases over deep nesting

Status: Passed

Compliance status legend:
🟢 - Fully compliant
🟡 - Partially compliant
🔴 - Not compliant
⚪ - Requires further human verification
🏷️ - Compliance label

@claudiunicolaa changed the title from "fix: add support for gemini-3-pro-preview model" to "feat: add support for gemini-3-pro-preview model" on Feb 4, 2026
@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects Bot commented Feb 4, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: General
Suggestion: Parametrize test for both variants
Suggestion impact: The test was converted to a parametrized pytest test over the two model strings, removing the duplicated assertions and using a single assertion for both variants.

code diff:

```diff
+    @pytest.mark.parametrize("model", [
+        "gemini/gemini-3-pro-preview",
+        "vertex_ai/gemini-3-pro-preview",
+    ])
+    def test_gemini_3_pro_preview(self, monkeypatch, model):
+        fake_settings = type("", (), {
+            "config": type("", (), {
+                "custom_model_max_tokens": 0,
+                "max_model_tokens": 0,
+            })()
+        })()
+        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)
+        assert get_max_tokens(model) == 1048576

-        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)
-
-        # Test Google AI Studio variant
-        model_gemini = "gemini/gemini-3-pro-preview"
-        expected = 1048576
-        assert get_max_tokens(model_gemini) == expected
-
-        # Test Vertex AI variant
-        model_vertex = "vertex_ai/gemini-3-pro-preview"
-        assert get_max_tokens(model_vertex) == expected
```
Refactor the test_gemini_3_pro_preview test by using pytest.mark.parametrize to
test both model variants, which reduces code duplication.

tests/unittest/test_get_max_tokens.py [69-87]

```diff
-def test_gemini_3_pro_preview(self, monkeypatch):
-    ...
-    # Test Google AI Studio variant
-    model_gemini = "gemini/gemini-3-pro-preview"
-    expected = 1048576
-    assert get_max_tokens(model_gemini) == expected
+@pytest.mark.parametrize("model", [
+    "gemini/gemini-3-pro-preview",
+    "vertex_ai/gemini-3-pro-preview",
+])
+def test_gemini_3_pro_preview(self, monkeypatch, model):
+    fake_settings = type("", (), {
+        "config": type("", (), {
+            "custom_model_max_tokens": 0,
+            "max_model_tokens": 0,
+        })()
+    })()
+    monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)
+    assert get_max_tokens(model) == 1048576

-    # Test Vertex AI variant
-    model_vertex = "vertex_ai/gemini-3-pro-preview"
-    assert get_max_tokens(model_vertex) == expected
```

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance [1-10]: 5

Why: The suggestion correctly identifies duplicated test logic and proposes a valid refactoring using pytest.mark.parametrize, which improves code maintainability and readability.

Impact: Low
  • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

- Consolidate test cases for gemini-3-pro-preview into a parameterized test
- Remove redundant assertions and simplify the test structure
- Ensure both Google AI Studio and Vertex AI variants are covered in a single test

This enhances maintainability and readability of the test suite for the gemini-3-pro-preview model.
- `vertex_ai/gemini-3-pro-preview`
- `gemini/gemini-3-pro-preview`
- `gpt-5-2025-08-07`
- `gpt-5.2-2025-12-11`

@ifox777 ifox777 Feb 6, 2026


/ask what is this code about?


```toml
[config]
model="gemini/gemini-3-pro-preview"
```
Contributor


This line isn't application "code" but a configuration example from the documentation (TOML format).

model="gemini/gemini-3-pro-preview" shows how to pin the model used by Qodo Merge to gemini-3-pro-preview via the gemini provider (as opposed to the vertex_ai/gemini-3-pro-preview variant shown earlier in the docs).

In other words: when Qodo Merge runs, it will select exactly this model rather than falling back to some default.
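The provider-prefix routing discussed here (gemini/... vs vertex_ai/...) can be illustrated with a small helper. `split_provider` is hypothetical, not part of pr_agent; LiteLLM performs this routing internally:

```python
def split_provider(model: str) -> tuple[str, str]:
    """Split a "provider/model-name" string into its two parts.

    e.g. "vertex_ai/gemini-3-pro-preview" -> ("vertex_ai", "gemini-3-pro-preview")
    A bare name with no "/" is treated as having no explicit provider.
    """
    provider, sep, name = model.partition("/")
    if not sep:
        return "", model
    return provider, name
```

Both entries in this PR share the same model name and differ only in the provider part, which determines whether requests go through Google AI Studio or Vertex AI.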

@naorpeled naorpeled merged commit 58edfba into The-PR-Agent:main Feb 7, 2026
2 checks passed

3 participants