
[model-gateway] add phi4 vision image processor #14430

Merged
slin1237 merged 5 commits into main from smg-mm-8/n on Dec 4, 2025

Conversation

slin1237 (Collaborator) commented on Dec 4, 2025

Checklist

github-actions bot added the Multi-modal, multi-modal language model, and model-gateway labels on Dec 4, 2025
gemini-code-assist bot (Contributor) commented:

Summary of Changes

Hello @slin1237, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates support for the Phi4-Vision (Phi-4-multimodal) model by adding a dedicated image processor. This new processor handles the unique preprocessing steps required by Phi4, including its specific dynamic HD transform, normalization, and attention mask generation. The changes ensure that the model gateway can correctly process visual inputs for the Phi4 model, expanding the range of supported multimodal models.

Highlights

  • New Phi4-Vision Image Processor: Introduced a new Phi4VisionProcessor to support the Phi-4-multimodal model, enabling its specific image preprocessing requirements within the model gateway.
  • Dynamic HD Transform Implementation: Implemented the Dynamic HD transform for Phi4-Vision, which uses a 448x448 base resolution, distinct normalization ([0.5, 0.5, 0.5] mean/std), and a default of 36 max crops, differing from Phi3-Vision (see the sketch after this list).
  • Attention Mask Generation: The new processor includes logic for generating per-crop attention masks and a specific token count formula, crucial for Phi4-Vision's architecture.
  • Golden Test Integration: Added comprehensive golden tests for the Phi4-Vision processor, comparing its output against HuggingFace's reference implementation for various image types and dimensions to ensure accuracy.
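
To make the dynamic HD idea above concrete, here is a minimal Rust sketch of how a crop grid might be chosen under the stated parameters (448x448 base, 36 max crops). The choose_crop_grid function, its ceil-then-shrink strategy, and the sample image dimensions are all assumptions for illustration; the PR's actual transform logic lives in the new Phi4VisionProcessor and is validated against the HuggingFace reference in the golden tests.

    // Illustrative sketch, not the PR's implementation: pick a (w_crops, h_crops)
    // grid of base x base tiles whose total count stays within max_crops.
    fn choose_crop_grid(width: u32, height: u32, base: u32, max_crops: u32) -> (u32, u32) {
        // Cover each side with whole base-sized tiles (ceiling division).
        let mut w_crops = (width + base - 1) / base;
        let mut h_crops = (height + base - 1) / base;

        // Shrink the longer side first until the crop budget is respected.
        while w_crops * h_crops > max_crops {
            if w_crops >= h_crops {
                w_crops -= 1;
            } else {
                h_crops -= 1;
            }
        }
        (w_crops.max(1), h_crops.max(1))
    }

    fn main() {
        // A 4032x3024 photo with 448px tiles and a 36-crop budget.
        let (w, h) = choose_crop_grid(4032, 3024, 448, 36);
        println!("grid: {}x{} = {} crops (+1 global image)", w, h, w * h);
    }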

gemini-code-assist bot left a comment

Code Review

This pull request introduces support for the Phi4-Vision image processor, including the core Rust implementation, a Python script for generating test data, and comprehensive golden tests. The implementation is well-structured and correctly follows the logic of the reference implementation. My review focuses on improving code clarity, maintainability, and performance by leveraging more idiomatic Rust and library features, particularly from ndarray and image, to replace manual loops with more efficient slicing and high-level operations.

Comment on lines 462 to 483
        // Global mask is all ones
        for y in 0..mask_res {
            for x in 0..mask_res {
                combined_mask[[0, y, x]] = 1;
            }
        }

        // Tile attention masks
        for h_idx in 0..h_crops {
            for w_idx in 0..w_crops {
                let tile_idx = h_idx * w_crops + w_idx + 1; // +1 for global
                let mask_y_start = h_idx * mask_res;
                let mask_x_start = w_idx * mask_res;

                for y in 0..mask_res {
                    for x in 0..mask_res {
                        combined_mask[[tile_idx, y, x]] =
                            attention_mask[[mask_y_start + y, mask_x_start + x]];
                    }
                }
            }
        }

medium

The construction of the combined_mask can be simplified by using ndarray's slicing and filling capabilities, which is more idiomatic and can improve performance by avoiding manual loops.

        // Global mask is all ones
        combined_mask.slice_mut(s![0, .., ..]).fill(1);

        // Tile attention masks
        for h_idx in 0..h_crops {
            for w_idx in 0..w_crops {
                let tile_idx = h_idx * w_crops + w_idx + 1; // +1 for global
                let mask_y_start = h_idx * mask_res;
                let mask_x_start = w_idx * mask_res;

                let tile_mask = attention_mask.slice(s![
                    mask_y_start..mask_y_start + mask_res,
                    mask_x_start..mask_x_start + mask_res
                ]);
                combined_mask
                    .slice_mut(s![tile_idx, .., ..])
                    .assign(&tile_mask);
            }
        }
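
As a side note, the s! macro used in this suggestion must be imported (use ndarray::s;). The standalone toy program below, with made-up 2x2 dimensions, shows the same fill/assign pattern end to end; it is illustrative only and not taken from the PR.

    // Illustrative only: a standalone toy showing the fill/assign pattern
    // from the suggestion above, with made-up 2x2 dimensions.
    use ndarray::{s, Array2, Array3};

    fn main() {
        let mask_res = 2;
        let (h_crops, w_crops) = (2, 2);
        // Flat mask covering the whole crop grid (all ones for the demo).
        let attention_mask = Array2::<u8>::ones((h_crops * mask_res, w_crops * mask_res));
        // One global mask slot plus one slot per crop.
        let mut combined_mask = Array3::<u8>::zeros((1 + h_crops * w_crops, mask_res, mask_res));

        // Global mask is all ones.
        combined_mask.slice_mut(s![0, .., ..]).fill(1);

        // Copy each crop's window of the flat mask into its own slot.
        for h_idx in 0..h_crops {
            for w_idx in 0..w_crops {
                let tile_idx = h_idx * w_crops + w_idx + 1; // +1 for global
                let (y0, x0) = (h_idx * mask_res, w_idx * mask_res);
                let tile = attention_mask.slice(s![y0..y0 + mask_res, x0..x0 + mask_res]);
                combined_mask.slice_mut(s![tile_idx, .., ..]).assign(&tile);
            }
        }
        assert_eq!(combined_mask.sum() as usize, (1 + h_crops * w_crops) * mask_res * mask_res);
    }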

Comment on lines +547 to +560
            for t in 0..num_crops {
                for c in 0..3 {
                    for y in 0..base {
                        for x in 0..base {
                            pixel_values[[b, t, c, y, x]] = output[[t, c, y, x]];
                        }
                    }
                }
                for y in 0..mask_res {
                    for x in 0..mask_res {
                        attention_masks[[b, t, y, x]] = mask[[t, y, x]];
                    }
                }
            }

medium

The batch padding logic can be greatly simplified using ndarray slicing. This avoids nested loops, making the code more readable and potentially faster.

            pixel_values
                .slice_mut(s![b, 0..num_crops, .., .., ..])
                .assign(output);
            attention_masks
                .slice_mut(s![b, 0..num_crops, .., ..])
                .assign(mask);
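
(Observation, not part of the bot's review: ndarray's assign takes a reference to the source array, so this suggestion compiles as written only if output and mask are already references or views in the surrounding loop; otherwise they would need to be borrowed as &output and &mask.)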

Comment on lines +1060 to +1068
let rust_image_sizes: Vec<(u32, u32)> = match result.model_specific.get("image_sizes") {
Some(ModelSpecificValue::UintTensor { data, shape }) => {
let num_images = shape[0];
(0..num_images)
.map(|i| (data[i * 2], data[i * 2 + 1]))
.collect()
}
_ => panic!("Expected image_sizes in model_specific"),
};

medium

The logic to reconstruct image_sizes from a flat vector can be made more concise and idiomatic by using chunks_exact instead of manual indexing.

    let rust_image_sizes: Vec<(u32, u32)> = match result.model_specific.get("image_sizes") {
        Some(ModelSpecificValue::UintTensor { data, .. }) => {
            data.chunks_exact(2).map(|s| (s[0], s[1])).collect()
        }
        _ => panic!("Expected image_sizes in model_specific"),
    };
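
(Observation, not part of the bot's review: chunks_exact(2) silently drops a trailing element if data has odd length, so this version relies on the tensor always holding exactly two values per image; the original shape[0]-based indexing made that pairing explicit.)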

slin1237 added the run-ci label on Dec 4, 2025
slin1237 merged commit 11d33c0 into main on Dec 4, 2025
51 of 54 checks passed
slin1237 deleted the smg-mm-8/n branch on December 4, 2025 at 15:40
tonyluj pushed a commit to openanolis/sglang that referenced this pull request Dec 5, 2025
yuchengz816-bot pushed a commit to yuchengz816-bot/sglang that referenced this pull request Dec 8, 2025
Kevin-XiongC pushed a commit to novitalabs/sglang that referenced this pull request Dec 9, 2025
dcampora pushed a commit to dcampora/sglang that referenced this pull request Dec 15, 2025
GuoYechang pushed a commit to GuoYechang/sglang that referenced this pull request Jan 13, 2026

Labels

model-gateway, Multi-modal, multi-modal language model, run-ci
