disable sm100 for FlashMLA and fast-hadamard-transform in cuda12.6.1 #11274
zhyncs merged 2 commits into sgl-project:main from
Conversation
Summary of Changes

This pull request addresses compatibility and build issues by disabling SM100 support for both FlashMLA and fast-hadamard-transform when building with CUDA 12.6.1. It does so by updating the commit reference for fast-hadamard-transform to a version that excludes SM100, and by introducing a conditional environment variable for FlashMLA in the Docker build process. This ensures the build environment is correctly configured for this CUDA version, preventing build errors related to the SM100 architecture.
Code Review
This pull request correctly disables sm100 for FlashMLA and fast-hadamard-transform when building with CUDA 12.6.1. The changes, which involve updating a commit hash for fast-hadamard-transform and conditionally setting an environment variable for FlashMLA, are logical and effectively address the issue. My review includes one suggestion to add a comment in the Dockerfile to improve code maintainability by explaining the purpose of this version-specific workaround.
```shell
if [ "$CUDA_VERSION" = "12.6.1" ]; then \
    export FLASH_MLA_DISABLE_SM100=1; \
fi && \
```
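As a hedged sketch of why this environment variable matters: the variable name `FLASH_MLA_DISABLE_SM100` comes from this PR's Dockerfile, but the arch-list values and the filtering logic below are illustrative assumptions, not FlashMLA's actual build code. A build script could honor the flag roughly like this:

```shell
# Hypothetical sketch: prune sm100 from a CUDA arch list when
# FLASH_MLA_DISABLE_SM100 is set. The variable name comes from the PR;
# the arch values and filtering here are assumptions for illustration.
CUDA_VERSION="12.6.1"
ARCHS="80;90;100"

# Mirror of the Dockerfile conditional in this PR.
if [ "$CUDA_VERSION" = "12.6.1" ]; then
    export FLASH_MLA_DISABLE_SM100=1
fi

# A build script could then drop the 100 (Blackwell) entry before
# handing the list to the CUDA compiler.
if [ "${FLASH_MLA_DISABLE_SM100:-0}" = "1" ]; then
    ARCHS=$(echo "$ARCHS" | tr ';' '\n' | grep -v '^100$' | tr '\n' ';' | sed 's/;$//')
fi

echo "$ARCHS"   # -> 80;90
```

The point of gating on the exact CUDA version string is that older toolchains such as 12.6.1 cannot target the SM100 architecture, so the arch must be excluded only for those builds while newer CUDA versions keep it.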
For better maintainability, it would be helpful to add a comment explaining why sm100 is being disabled for this specific CUDA version. This provides valuable context for future developers who might encounter this workaround.
For example:
```shell
# Disable sm100 for CUDA 12.6.1 to work around a build issue.
```

```diff
 ARG DEEPEP_COMMIT=9af0e0d0e74f3577af1979c9b9e1ac2cad0104ee
 ARG FLASHMLA_COMMIT=1408756a88e52a25196b759eaf8db89d2b51b5a1
-ARG FAST_HADAMARD_TRANSFORM_COMMIT=f3cdeed95b0f3284b5df3da9b3311d3d0600ce2b
+ARG FAST_HADAMARD_TRANSFORM_COMMIT=7fd811c2b47f63b0b08d2582619f939e14dad77c
```
Why do we need to update this commit?
This commit excludes SM100 from the CUDA 12.6.1 build.
Motivation
Disable sm100 for FlashMLA and fast-hadamard-transform in CUDA 12.6.1.

PR in fast-hadamard-transform that disables sm100 in CUDA 12.6.1: Dao-AILab/fast-hadamard-transform@7fd811c
Accuracy Tests
Benchmarking and Profiling
Checklist