Describe the bug
`gpt-image-2` — Azure OpenAI's newest image generation model, publicly previewed on 2026-04-21 (docs) — is missing from the SDK's `ImageModel` Literal and from several parameter/docstring references. Passing `model="gpt-image-2"` works at runtime (because the type alias is `Union[str, ImageModel, None]`), but:

- mypy / pyright users get a spurious type error on a valid, publicly available model name.
- The parameter docs in `image_generate_params.py`, `image_edit_params.py`, `responses/tool.py`, and `responses/tool_param.py` still list only `gpt-image-1` / `gpt-image-1-mini` / `gpt-image-1.5`, which gives the impression that `gpt-image-2` isn't supported.

Everything at the request-body and response-parsing layer (`quality="low"`, `output_format`, `background`, `moderation`, custom sizes, `usage.{input,output,total}_tokens`) already works — I've verified end-to-end via respx-mocked calls against `AzureOpenAI.images.generate` / `.edit`, so the fix really is just the Literal + docstring bump that usually follows every new model release.
To Reproduce
```python
from typing import get_args
from openai.types import ImageModel

print(get_args(ImageModel))
# ('gpt-image-1.5', 'dall-e-2', 'dall-e-3', 'gpt-image-1', 'gpt-image-1-mini')
# -> 'gpt-image-2' is missing
```
And under a strict type-checker:
```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="<YOUR_ENDPOINT>",
    api_key="<YOUR_KEY>",
    api_version="2024-10-21",
)

# mypy: error: Argument "model" to "generate" ... has incompatible type
# "Literal['gpt-image-2']"; expected "str | ImageModel | None"
# (only surfaces if the caller narrows model to a Literal, but it also
# means IDE autocomplete / docs don't suggest gpt-image-2.)
client.images.generate(
    model="gpt-image-2",
    prompt="a cute corgi puppy sitting on grass",
    size="1024x1024",
    quality="low",
    n=1,
)
```
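The runtime-vs-typing gap can also be demonstrated without touching Azure at all. The snippet below uses a local stand-in for the SDK's alias (members copied from the `get_args` output above) — it is a sketch, not the real `openai.types.ImageModel`:

```python
from typing import Literal, Optional, Union, get_args

# Local stand-in mirroring the current SDK alias (members copied from the
# get_args() output above) -- NOT the real openai.types.ImageModel.
ImageModel = Literal[
    "gpt-image-1.5", "dall-e-2", "dall-e-3", "gpt-image-1", "gpt-image-1-mini"
]

def generate(model: Optional[Union[str, ImageModel]]) -> Optional[str]:
    """Toy signature shaped like the `model` parameter of images.generate."""
    return model

# Runtime is happy because plain `str` is part of the union...
assert generate("gpt-image-2") == "gpt-image-2"
# ...but the Literal itself has no such member, so autocomplete / docs
# (and narrowed-Literal call sites) never surface it:
assert "gpt-image-2" not in get_args(ImageModel)
```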
Expected behavior
`gpt-image-2` is a first-class value in `openai.types.ImageModel` and in the three `responses` tool-param Literals:

- `src/openai/types/image_model.py`
- `src/openai/types/responses/tool.py` (`ImageGeneration.model`)
- `src/openai/types/responses/tool_param.py` (`ImageGeneration.model`)

And the docstrings in `image_generate_params.py` / `image_edit_params.py` mention `gpt-image-2` alongside the existing GPT-Image variants.
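Concretely, the updated alias would look something like the sketch below (a hand-written approximation; the generated file may order or format the members differently):

```python
from typing import Literal, get_args

# Hand-written sketch of the expected src/openai/types/image_model.py after
# the spec sync; the Stainless-generated file may differ in ordering/format.
ImageModel = Literal[
    "dall-e-2",
    "dall-e-3",
    "gpt-image-1",
    "gpt-image-1-mini",
    "gpt-image-1.5",
    "gpt-image-2",  # <- the missing member
]

assert "gpt-image-2" in get_args(ImageModel)
```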
Actual behavior
`ImageModel` / `ImageGeneration.model` Literals contain only `gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`, plus the legacy `dall-e-*`. Docstrings say e.g.:

```
One of `dall-e-2`, `dall-e-3`, or a GPT image model (`gpt-image-1`,
`gpt-image-1-mini`, `gpt-image-1.5`).
```
Verification that the rest of the surface works
I confirmed with respx that — once you pass `model="gpt-image-2"` as a plain `str` — the SDK:

- serializes `quality="low"`, `output_format`, `background`, `moderation`, `n`, and arbitrary multiples-of-16 sizes (e.g. `1088x1088`) correctly into the request body;
- round-trips the response, including the `usage` object (`input_tokens`, `output_tokens`, `output_tokens_details`, etc.);
- sends `multipart/form-data` correctly for `images.edit`.
So this really is just a Literal/docstring sync, not a wire-level issue. A minimal example using AzureOpenAI + gpt-image-2 is attached in a companion PR to examples/azure_image.py.
References

- Azure OpenAI API versions (`2024-10-21` is the latest GA version that supports the image endpoints used here)

Happy to do a PR for `types/image_model.py` and the `responses/tool*.py` Literals if it would be useful, but I realize those files are generated by Stainless, so the change should probably flow in from the OpenAPI spec on the next sync. Filing this so it's on the radar.
Additional context
I've also put together a small `examples/azure_image.py` showing end-to-end `generate` + `edit` against an Azure GPT-Image deployment (with `quality="low"` to keep costs in check), since `examples/azure.py` only covers chat completions today. Opening that as a separate PR.