[6/n] decouple quantization implementation from vLLM dependency #10750
FlamingoPg merged 31 commits into sgl-project:main
Conversation
Summary of Changes: Hello @Hongbosherlock, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly refactors the quantization layer by removing its direct dependency on vLLM, achieved by internalizing several quantization schemes and utility functions.
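To make that concrete, the decoupling pattern generally amounts to swapping a vLLM import for an equivalent vendored inside sglang. The following is a minimal sketch only, assuming illustrative module paths and a simplified constructor signature; it is not the exact code introduced by this PR.

```python
# Minimal sketch of the decoupling pattern; paths and signatures below are
# illustrative assumptions, not the exact code introduced by this PR.

# Before: the scheme came from vLLM, so sglang imported vLLM at runtime.
# from vllm.model_executor.layers.quantization.compressed_tensors.schemes import (
#     CompressedTensorsW8A8Int8,
# )

# After: the scheme is vendored inside sglang and imported locally.
from sglang.srt.layers.quantization.compressed_tensors.schemes import (
    CompressedTensorsW8A8Int8,
)


def build_int8_scheme(weight_quant, input_quant):
    # Pick the vendored INT8 scheme; no vLLM import is required anymore.
    return CompressedTensorsW8A8Int8(
        strategy=weight_quant.strategy,
        is_static_input_scheme=not input_quant.dynamic,
    )
```

The effect is that the compressed-tensors schemes live in-tree, so sglang stays self-contained instead of reaching into vLLM's quantization package.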
Code Review
This pull request decouples the quantization implementation from the vLLM dependency by vendoring or re-implementing the necessary quantization schemes and utilities within the sglang repository. This is a positive step towards making the project more self-contained. However, the changes introduce a significant amount of commented-out code that should be removed for better maintainability. More critically, one of the new files (compressed_tensors_w8a8_int8.py) appears to be broken due to references to variables defined in commented-out code, which will cause runtime errors. I've left detailed comments on these issues.
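To illustrate the failure mode flagged above, and one way to fix it, here is a hedged, hypothetical sketch; the helper name and its body are assumptions for illustration, not the PR's actual code. When a vLLM import is commented out but the symbol is still referenced, the module raises NameError at runtime; vendoring the helper locally and deleting the dead code resolves it.

```python
# Hypothetical sketch of the fix the review asks for; names and bodies are
# illustrative, not taken from this PR.
import torch

# A commented-out vLLM import like the one below, with the symbol still used
# later in the file, is what causes the runtime NameError the review mentions:
# from vllm.model_executor.layers.quantization.utils.w8a8_utils import (
#     convert_to_channelwise,
# )


def convert_to_channelwise(
    weight_scale: torch.Tensor, logical_widths: list[int]
) -> torch.Tensor:
    """Vendored helper (illustrative): expand per-shard scales to per-channel scales."""
    channel_scale = torch.empty(
        (sum(logical_widths), 1),
        dtype=weight_scale.dtype,
        device=weight_scale.device,
    )
    start = 0
    for idx, width in enumerate(logical_widths):
        # Broadcast the single scale of shard `idx` across its output channels.
        channel_scale[start : start + width] = weight_scale[idx]
        start += width
    return channel_scale
```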
Review comment threads were opened on the following files:

- ...on/sglang/srt/layers/quantization/compressed_tensors/schemes/compressed_tensors_w8a8_int8.py
- python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors_moe.py
- python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors.py
- python/sglang/srt/layers/quantization/compressed_tensors/schemes/__init__.py
- ...on/sglang/srt/layers/quantization/compressed_tensors/schemes/compressed_tensors_w8a16_fp8.py
- python/sglang/srt/layers/quantization/compressed_tensors/schemes/compressed_tensors_wNa16.py
@AniZpZ this PR is ready for review.

Looks good, let's move on.

Sorry to cancel the CI; unit-test-backend-1-gpu (2) won't succeed since it is flaky, and I am fixing it and need to verify now. I will merge main and retrigger CI for this PR once it is fixed.

Thanks!
Motivation
- Remove vLLM-dependency-test
- Remove the compressed_tensors dependency from vLLM
- Quant methods now supported:
Modifications
Accuracy Tests
Benchmarking and Profiling
Checklist