Torch not compiled with CUDA enabled

Bug #2051023 reported by JP Ventura
This bug affects 1 person
Affects: pytorch (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

In [1]: import torch

In [2]: torch.cuda.is_available()
Out[2]: False

In [3]: torch.zeros(1).cuda()
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[3], line 1
----> 1 torch.zeros(1).cuda()

File /usr/lib/python3/dist-packages/torch/cuda/__init__.py:239, in _lazy_init()
    235         raise RuntimeError(
    236             "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
    237             "multiprocessing, you must use the 'spawn' start method")
    238     if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 239         raise AssertionError("Torch not compiled with CUDA enabled")
    240     if _cudart is None:
    241         raise AssertionError(
    242             "libcudart functions unavailable. It looks like you have a broken build?")

AssertionError: Torch not compiled with CUDA enabled
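
A quick way to check whether the installed torch build was compiled with CUDA support (a diagnostic sketch, not part of the original report):

```python
import torch

# torch.version.cuda is a version string (e.g. "12.1") on builds
# compiled with CUDA support, and None on CPU-only builds such as
# the one the traceback above suggests.
print(torch.__version__)
print(torch.version.cuda)
```

If the second line prints None, the failure is expected regardless of which NVIDIA driver or toolkit packages are installed, since the torch binary itself lacks CUDA support.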

Revision history for this message
JP Ventura (jpventura) wrote :

Packages nvidia-cg-toolkit, nvidia-cuda-toolkit-doc, nvidia-cuda-toolkit-gcc, and libcudart12 were installed from Canonical repositories. However, the bug persists.

If I create a Python virtual environment or a conda environment, torch installed by pip works perfectly.
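
Until the Ubuntu package ships a CUDA-enabled build, affected code can select the device defensively instead of calling .cuda() unconditionally (a minimal workaround sketch):

```python
import torch

# Fall back to CPU when the installed torch build lacks CUDA support,
# avoiding the AssertionError raised by an unconditional .cuda() call.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.zeros(1, device=device)
print(x.device)
```

This mirrors the common PyTorch idiom for writing code that runs on both CPU-only and CUDA-enabled installs.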
