RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling cublasCreate(handle) #2417

@ghost

Description

🐛 Bug

When I run your code I get the following error:

RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`

My CUDA installation appears to be fine, since I am able to run another open-source repository that uses CUDA (the yolact repository).

To Reproduce (REQUIRED)

Input:

python detect.py --source /home/muhammadmehdi/PycharmProjects/VIDEOS/INTERIOR_LENGTHY --weights yolov5s.pt --conf 0.25

Output:

Traceback (most recent call last):
  File "detect.py", line 175, in <module>
    detect()
  File "detect.py", line 33, in detect
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/home/muhammadmehdi/PycharmProjects/SORT_TRIANGULATION/YOLOV5/models/experimental.py", line 120, in attempt_load
    model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model
  File "/home/muhammadmehdi/PycharmProjects/SORT_TRIANGULATION/YOLOV5/models/yolo.py", line 169, in fuse
    m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
  File "/home/muhammadmehdi/PycharmProjects/SORT_TRIANGULATION/YOLOV5/utils/torch_utils.py", line 185, in fuse_conv_and_bn
    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size()))
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`
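The failing call is the `torch.mm` inside `fuse_conv_and_bn`; cuBLAS creates its handle lazily on the first GPU matrix multiply, so any GPU matmul should hit the same `cublasCreate(handle)` path. A minimal sketch to check whether cuBLAS initializes at all, outside of YOLOv5 (the device fallback is an assumption so the snippet also runs on CPU-only machines):

```python
import torch

# cuBLAS is initialized lazily on the first GPU matrix multiply,
# so this exercises the same cublasCreate(handle) call as the traceback above.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(8, 8, device=device)
b = torch.randn(8, 8, device=device)
c = torch.mm(a, b)  # would raise the cuBLAS error here if the handle cannot be created
print(c.shape)
```

If this tiny matmul also fails, the problem is in the torch/CUDA setup itself rather than in the YOLOv5 code.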

Expected behavior

The code should perform inference on the images, as outlined in the description here.

Environment

  • OS: Ubuntu 20.04
  • GPU: GeForce GTX 1650, 3914.1875 MB

Additional context

I tried with both conda and pip, and both environments encounter the exact same error. My Python version is 3.8.5 and my torch version is 1.8.0.
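One mismatch worth ruling out is the CUDA toolkit the torch wheel was built against versus the local driver; torch exposes the build versions directly (a small sketch; `torch.version.cuda` and `torch.backends.cudnn.version()` return `None` on CPU-only builds):

```python
import torch

print(torch.__version__)               # PyTorch build, e.g. 1.8.0
print(torch.version.cuda)              # CUDA version the wheel was compiled with
print(torch.backends.cudnn.version())  # bundled cuDNN version, if any
```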

I also verified that I have the GPU version of PyTorch installed by using the following code:

print(torch.cuda.current_device())    # index of the active device
print(torch.cuda.device(0))           # device context object for GPU 0
print(torch.cuda.device_count())      # number of visible GPUs
print(torch.cuda.get_device_name(0))  # GPU model name
print(torch.cuda.is_available())      # True if the CUDA build and driver work

And the output I got was:

0
<torch.cuda.device object at 0x7f3664031460>
1
GeForce GTX 1650
True
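CUDA errors can surface at a misleading point in the stack because kernels launch asynchronously; rerunning with `CUDA_LAUNCH_BLOCKING=1` forces synchronous execution so the traceback points at the actual failing operation. A sketch of the rerun, reusing the command from the reproduction section (since this error is raised at `cublasCreate` itself, the flag mainly rules out an earlier asynchronous failure):

```shell
# Force synchronous CUDA execution so the Python traceback
# identifies the operation that actually failed.
CUDA_LAUNCH_BLOCKING=1 python detect.py \
  --source /home/muhammadmehdi/PycharmProjects/VIDEOS/INTERIOR_LENGTHY \
  --weights yolov5s.pt --conf 0.25
```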
