Bump pytorch to 2.0 for AMD Users on Linux#10465
AUTOMATIC1111 merged 3 commits into AUTOMATIC1111:dev from baptisterajaut:master
Conversation
So apparently it works now? Before, you would get "PyTorch can't use the GPU", but not anymore.
If only I proofread what I wrote.
Since I cannot verify any of this, I'd like some comments from AMD users.
Well, yeah, but maybe it works on your card and is broken on another.
I haven't tested this PR, but for what it's worth I've been using: Ubuntu 22.04.2 LTS
The 6800 XT works. Tried with --opt-split-attention: about 2x or a bit more faster than the Colab free GPU, though hires fix is very slow. I'm getting OOM when doing hires fix (DPM++ 2M Karras, 20 steps, 640x1024, 1.5x nearest-exact upscale, 0.55 denoise).
This happens even when I use --opt-sdp-attention, which makes my RAM usage hover between 4-8 GB. This behavior reminds me of what happened on the Colab GPUs when I first tried torch 2.0 on them. I can get through with --opt-split-attention-invokeai --medvram, comparable to the Colab T4. If these results are consistent, that would be great. Unfortunately, VRAM usage during hires fix with this setup is ~14500 MB, meaning that if I were to load extensions such as ControlNet, I would crash from running out of memory.
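For readers trying to reproduce the comparison above: these are ordinary webui command-line flags. A minimal sketch (flag names copied verbatim from the comment; the launch command itself is illustrative, not from this thread):

```shell
# Flag combination that fit in VRAM for the commenter, per the comment above:
WORKING_FLAGS="--opt-split-attention-invokeai --medvram"
# Attention variant that still hit OOM during hires fix for them:
OOM_FLAGS="--opt-sdp-attention"
# Illustrative launch (run from the webui directory):
# python launch.py $WORKING_FLAGS
echo "$WORKING_FLAGS"
```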
Unfortunately confirmed, for RX 5000 series owners to be precise. Manual downgrading isn't working: the PyTorch webpage lists previous versions, but when you try to install them they aren't found. We need to figure out how to get a version of torch 1.13, I think, unless somebody can find a workaround to get torch 2 working on this range of cards.
Hi everybody, I'm the guy who wrote the comment "# AMD users will still use torch 1.13 because 2.0 does not seem to work.", which was deleted by this PR (see #9404). Anyway, I agree that we shouldn't prevent every AMD card from using PyTorch 2 (even if it's already possible to force TORCH_COMMAND manually) if the problem is only on older cards. I've come up with a solution which should be fine for everyone, or at least I hope: #11048
If you are on Python 3.11, the failure is probably because the older PyTorch wheels are built for Python 3.10 only. You can try with a conda env; the ln command is a workaround to make the tcmalloc code work.
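As noted above, TORCH_COMMAND can already be forced manually, and a Python 3.10 conda env sidesteps the 3.11 wheel gap. A sketch under those assumptions (the exact versions and index URL here are my guesses, not from this thread; check the PyTorch previous-versions page for what is actually published):

```shell
# Hypothetical workaround: pin an older torch build in a Python 3.10 env.
# conda create -n webui python=3.10 && conda activate webui   # run interactively first
export TORCH_COMMAND="pip install torch==1.13.1 torchvision==0.14.1 --index-url https://download.pytorch.org/whl/rocm5.2"
# ./webui.sh   # the webui launcher runs $TORCH_COMMAND via pip before starting
echo "$TORCH_COMMAND"
```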


Describe what this pull request is trying to achieve.
This pull request gets PyTorch working again for AMD users. It seems that torch 1.13 and the matching torchvision are no longer available on the PyTorch repos, so bumping to this version makes installation work again.
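Concretely, the bump amounts to installing the torch 2.0 ROCm wheels instead of the withdrawn 1.13 ones. A hedged sketch of the equivalent manual command (the exact versions and the rocm5.4.2 index URL are assumptions based on this discussion, not copied from the diff):

```shell
# Assumed equivalent of the bumped TORCH_COMMAND for AMD on Linux:
TORCH_COMMAND="pip install torch==2.0.0 torchvision==0.15.1 --index-url https://download.pytorch.org/whl/rocm5.4.2"
# export TORCH_COMMAND before running ./webui.sh to apply it
echo "$TORCH_COMMAND"
```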
Additional notes and description of your changes
Weirdly, torch 2 didn't seem to work before, but this one does. Maybe it requires ROCm 5.4.2 and above?
I also pushed this to master so the webui works again for everyone coming in.
Environment this was tested in