Torch not compiled with CUDA enabled
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
pytorch (Ubuntu) | New | Undecided | Unassigned |
Bug Description
```
In [1]: import torch

In [2]: torch.cuda.is_available()
Out[2]: False

In [3]: torch.zeros(
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[3], line 1
----> 1 torch.zeros(

File /usr/lib/
    235     raise RuntimeError(
    236         "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
    237         "multiprocessing, you must use the 'spawn' start method")
    238 if not hasattr(torch._C, '_cuda_
--> 239     raise AssertionError(
    240 if _cudart is None:
    241     raise AssertionError(
    242         "libcudart functions unavailable. It looks like you have a broken build?")

AssertionError: Torch not compiled with CUDA enabled
```
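The traceback above fires because the code requests a CUDA tensor from a CPU-only build of torch. A minimal defensive sketch (the `pick_device` helper name is hypothetical, not part of torch) that selects a device based on what the build actually supports, instead of assuming CUDA is present:

```python
def pick_device(cuda_available: bool) -> str:
    """Return the device string to use for new tensors.

    Pass torch.cuda.is_available(); it returns False on CPU-only
    builds such as the Ubuntu pytorch package in this report, so
    the code falls back to the CPU instead of raising AssertionError.
    """
    return "cuda" if cuda_available else "cpu"
```

Usage would look like `torch.zeros(3, device=pick_device(torch.cuda.is_available()))`, which works on both CPU-only and CUDA-enabled builds.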
Packages nvidia-cg-toolkit, nvidia-cuda-toolkit-doc, nvidia-cuda-toolkit-gcc, and libcudart12 were installed from Canonical repositories. However, the bug persists.
If I create a Python virtual environment or a conda environment, torch installed by pip works perfectly.
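The workaround described above can be sketched as follows; the environment path `~/torch-env` is an arbitrary choice, and this assumes `python3-venv` is installed:

```shell
# Create and activate a virtual environment (path is arbitrary).
python3 -m venv ~/torch-env
. ~/torch-env/bin/activate

# PyPI wheels ship with a working backend, unlike the CPU-only
# build of the distribution package that triggers this bug.
pip install torch

# Verify: should no longer raise "Torch not compiled with CUDA enabled"
# when CUDA hardware and drivers are present.
python -c "import torch; print(torch.cuda.is_available())"
```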