
feat: add max token for gpt-5.3-chat#2280

Merged
naorpeled merged 1 commit into qodo-ai:main from ElliotNguyen68:main
Mar 22, 2026

Conversation

@ElliotNguyen68
Contributor

No description provided.

@qodo-free-for-open-source-projects
Contributor

Review Summary by Qodo

Add max token limit for gpt-5.3-chat model

✨ Enhancement


Walkthroughs

Description
• Add max token limit for gpt-5.3-chat model
• Set context window to 250K tokens with config override capability
Diagram

```mermaid
flowchart LR
  A["Model Configuration"] -- "Add gpt-5.3-chat" --> B["250K Token Limit"]
  B -- "Config Override" --> C["max_model_tokens Setting"]
```


File Changes

1. pr_agent/algo/__init__.py ⚙️ Configuration changes +1/-0

Add gpt-5.3-chat model token configuration

• Added gpt-5.3-chat model entry to token limits dictionary
• Set maximum context window to 250000 tokens
• Included comment noting potential config override via max_model_tokens
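
The bullets above amount to a one-line dictionary entry. A minimal sketch of how such a cap can interact with a `config.max_model_tokens` override (the `MAX_TOKENS` dict mirrors the one in `pr_agent/algo/__init__.py`, but `get_effective_limit` is a hypothetical helper for illustration, not pr-agent's actual API):

```python
# MAX_TOKENS mirrors the dict in pr_agent/algo/__init__.py;
# get_effective_limit is a hypothetical helper, not pr-agent's API.
MAX_TOKENS = {
    'gpt-4o': 128000,
    'gpt-5.3-chat': 250000,  # 250K, but may be limited by config.max_model_tokens
}

def get_effective_limit(model: str, max_model_tokens: int = 0) -> int:
    """Return the model's context window, capped by the config override if set."""
    limit = MAX_TOKENS[model]
    if max_model_tokens:  # 0 means "no override configured"
        limit = min(limit, max_model_tokens)
    return limit

print(get_effective_limit('gpt-5.3-chat'))         # no override: full window
print(get_effective_limit('gpt-5.3-chat', 32000))  # override caps the window
```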

pr_agent/algo/__init__.py




@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Mar 21, 2026

Code Review by Qodo

🐞 Bugs (0) 📘 Rule violations (1) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Remediation recommended

1. Hardcoded gpt-5.3-chat tokens 📘 Rule violation ⚙ Maintainability
Description
The PR hardcodes a new model token limit ('gpt-5.3-chat': 250000) in Python source, changing
runtime behavior without using the documented TOML configuration mechanisms. This reduces
maintainability and makes the behavior harder to override consistently across deployments.
Code

pr_agent/algo/__init__.py[45]

+    'gpt-5.3-chat': 250000,  # 250K, but may be limited by config.max_model_tokens
Evidence
PR Compliance ID 7 requires behavior/configuration changes to be made via .pr_agent.toml and/or
pr_agent/settings/*.toml rather than hardcoding values in Python. The added line introduces a new
hardcoded token limit for gpt-5.3-chat directly in code.

AGENTS.md
pr_agent/algo/__init__.py[45-45]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A new model token limit was added as a hardcoded constant in `pr_agent/algo/__init__.py`, which is a behavior/configuration change that should be driven by the repo’s TOML configuration mechanisms.
## Issue Context
The PR adds `'gpt-5.3-chat': 250000` to `MAX_TOKENS`, affecting token-limit behavior but not via `.pr_agent.toml` or `pr_agent/settings/`.
## Fix Focus Areas
- pr_agent/algo/__init__.py[45-45]
- pr_agent/settings/configuration.toml[28-33]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
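
A hedged sketch of the TOML-driven alternative the rule points at. The review evidence cites `pr_agent/settings/configuration.toml` and a `max_model_tokens` setting; the per-model table below is purely illustrative and not part of pr-agent's documented schema:

```toml
[config]
# 0 (or unset) = use the model's own context window
max_model_tokens = 0

# Illustrative only: a hypothetical per-model override table,
# not pr-agent's actual settings schema.
[config.model_token_limits]
"gpt-5.3-chat" = 250000
```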



ⓘ The new review experience is currently in Beta.


Previous review results

Review updated until commit 86606ac

Results up to commit fbbfb6a


(Findings identical to the current review above: one rule violation for the hardcoded gpt-5.3-chat token limit.)
Results up to commit 5aecfe0


(Findings identical to the current review above.)

@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Mar 21, 2026

Persistent review updated to latest commit fbbfb6a

@ElliotNguyen68
Contributor Author

ElliotNguyen68 commented Mar 21, 2026

Hi @mrT23, could you help review and approve this?

@naorpeled
Collaborator

Hey @ElliotNguyen68,
thanks for opening this!

It seems the max token limit for this model is 128k
https://developers.openai.com/api/docs/models/gpt-5.3-chat-latest

Let me know if I'm missing anything 🙏
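
If the documented window is indeed 128k, the corrected entry would presumably look like the sketch below. This is an assumption: the value actually merged in the follow-up commit is not visible in this thread.

```python
# Sketch of the corrected MAX_TOKENS entry, assuming the 128k limit
# from the OpenAI model docs linked above; the value merged in
# commit 86606ac is not shown in this thread.
MAX_TOKENS = {
    'gpt-5.3-chat': 128000,
}
print(MAX_TOKENS['gpt-5.3-chat'])  # 128000
```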

@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Mar 22, 2026

Persistent review updated to latest commit 86606ac

@ElliotNguyen68
Contributor Author

Hi @naorpeled, yes, you're correct; I checked the wrong model.
Please review it again.

@naorpeled naorpeled merged commit f5cf45b into qodo-ai:main Mar 22, 2026
