Fix upsample converter not properly registered #2683
narendasan merged 1 commit into pytorch:main from HolyWu:upsample
Conversation
Thanks for the analysis and for pointing out the above! I looked at it, and it looks like in the above case the AOT trace is returning the decomposition: post the torch.export or the AOT trace, the graph decomposes into a big graph. As far as I understand the
@apbose - does the decomposition into that large set of operators you showed still occur if we remove the following two lines (but don't add anything to
TensorRT/py/torch_tensorrt/dynamo/lowering/_decomposition_groups.py, lines 160 to 161 at ad74a73)?
@gs-olive, yes, the above operation decomposes into the large set of ops when the two lines shown above have been commented out.
Description
Partially addresses #2665
Even though the operator is properly registered once #2681 is applied, it is still decomposed into lower-level operators rather than converted by this converter, just like #2665 (comment). Adding `aten.upsample_bilinear2d.default` and `aten.upsample_bilinear2d.vec` to `torch_disabled_decompositions` doesn't help. Compiling the model under `with torch.inference_mode()` also doesn't help. In the end I found out that I have to remove these two lines and this line in PyTorch to bypass the decomposition; only then does this converter finally work.

Type of change
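The core mechanism at issue can be reproduced with plain PyTorch APIs, independent of the Torch-TensorRT lowering pipeline: whether `aten.upsample_bilinear2d` survives tracing depends entirely on whether a decomposition for it is present in the active decomposition table. This is a minimal sketch using `make_fx` and `torch._decomp.get_decompositions` to show the op being kept intact versus decomposed into lower-level ops; it is an illustration of the general decomposition mechanism, not of Torch-TensorRT's internal code paths.

```python
import torch
from torch._decomp import get_decompositions
from torch.fx.experimental.proxy_tensor import make_fx


def fn(x):
    # Call the aten overload directly: (input, output_size, align_corners)
    return torch.ops.aten.upsample_bilinear2d.default(x, [8, 8], False)


x = torch.randn(1, 1, 4, 4)

# With no decomposition table, the traced graph keeps the op as-is.
g_keep = make_fx(fn)(x)

# With upsample_bilinear2d registered in the table, tracing rewrites it
# into lower-level ops (arange/index/arithmetic), so the original op
# disappears from the graph -- and a converter keyed on it never fires.
decomps = get_decompositions([torch.ops.aten.upsample_bilinear2d])
g_decomp = make_fx(fn, decomposition_table=decomps)(x)

ops_keep = {n.target for n in g_keep.graph.nodes if n.op == "call_function"}
ops_decomp = {n.target for n in g_decomp.graph.nodes if n.op == "call_function"}
```

This mirrors the behavior described above: as long as the decomposition entries remain registered (the two lines in `_decomposition_groups.py`), the lowered graph no longer contains `upsample_bilinear2d` for the converter to match.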
Checklist: