Implement PyTorch support for float8 types (F8_E5M2 and F8_E4M3) #404
Merged

Narsil merged 2 commits into huggingface:main from zeux:fp8-pt on Jan 18, 2024
Conversation
Note that the PyTorch name for the e4m3 type has an extra "fn" suffix to match MLIR, but the format should be the same ("fn" means "finite").

We also test that -0.5 round-trips in both formats, which makes sure that the format is preserved properly: both types are single-byte and have the same representation for zero, but different representations for -0.5.
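A minimal sketch of what such a round-trip check can look like from Python (an illustration, not the PR's actual test; assumes PyTorch >= 2.1 and a safetensors build that includes this PR):

```python
import torch
from safetensors.torch import save, load

for dtype in (torch.float8_e5m2, torch.float8_e4m3fn):
    t = torch.tensor([-0.5]).to(dtype)
    restored = load(save({"w": t}))["w"]    # serialize to bytes and back
    assert restored.dtype == dtype          # dtype survives the round-trip
    assert restored.float().item() == -0.5  # value survives the round-trip
    # The raw byte differs per format, which is why -0.5 catches mix-ups:
    # e5m2 encodes -0.5 as 0xb8, e4m3 as 0xb0 (zero is 0x00 in both).
    print(dtype, hex(t.view(torch.uint8).item()))
```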
Contributor Author

Really need this!
zeux added a commit to zeux/calm that referenced this pull request on Dec 29, 2023:
NVidia GPUs support two fp8 types: e5m2 and e4m3. PyTorch supports both from version 2.1; note that safetensors currently does not support these fully, but it will once this PR gets merged: huggingface/safetensors#404

This change implements initial support for e5m2. e4m3 should be a better fit in general, but:

- It has a smaller exponent range, so it requires weight adjustment to fit into this range; Llama2 works fine without it, but Mistral breaks due to small weights that get rounded to zero.
- More critically, NV GPUs only support fp8 to half/float conversion natively since Hopper (SM9.0). fp8e5m2 has a fast emulation path because it has the same exponent range as fp16 (similarly to bfloat16, conversion just requires zero padding), but fp8e4m3 emulation is impractically slow.

We currently just use the builtin PyTorch conversion, which results in an aggregate ~0.5% perplexity drop. This can probably be improved in the future.

Warp-parallel matmul needs to process 4 elements at a time now so that we keep loading 4 bytes per thread to maximize effective bandwidth.
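The "zero padding" claim for e5m2 can be demonstrated directly: e5m2 shares fp16's 5-bit exponent (bias 15), so each e5m2 byte is exactly the high byte of the corresponding fp16 encoding. A minimal sketch in PyTorch (an illustration of the trick, not the calm kernel; assumes a little-endian host):

```python
import torch

x = torch.tensor([-0.5, 1.0, 0.25]).to(torch.float8_e5m2)
raw = x.view(torch.uint8)
# Put each e5m2 byte into the high byte of a 16-bit lane and zero-pad the
# low (mantissa) byte; on a little-endian host the low byte comes first.
padded = torch.stack([torch.zeros_like(raw), raw], dim=-1).flatten()
as_half = padded.view(torch.float16)
assert torch.equal(as_half, x.to(torch.float16))  # exact, no rounding
```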
Contributor

Thanks a lot for this PR, sorry I missed it when you published it.
This PR completes support for float8 types by making them available when using safetensors from Python with PyTorch; float8 types have been supported in PyTorch since July 2023 (pytorch/pytorch#104242).

Note that the PyTorch name for the e4m3 type has an extra "fn" suffix to match MLIR, but the format should be the same ("fn" means "finite").

The added test checks that -0.5 round-trips in both formats: both types are single-byte and have the same representation for zero, but different representations for -0.5.
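For reference, a sketch of the dtype correspondence this implies (the left-hand strings are the safetensors header dtypes named in the title; the attribute names assume PyTorch >= 2.1):

```python
import torch

# safetensors header dtype -> PyTorch dtype; note the "fn" suffix on e4m3
FP8_DTYPES = {
    "F8_E5M2": torch.float8_e5m2,
    "F8_E4M3": torch.float8_e4m3fn,
}
```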