Description
Hello! In short, over the last few days I've run into all of the recently reported problems that are (probably) attributable to an NVIDIA driver change: a memory access violation on the line calling cuda.to_device, Numba failing to find DLLs, and weird script crashes that produce nothing but a return code.
Let me start with where I am right now. I've tried many suggestions from other issues; none of them worked. I now have a fresh install of Anaconda (25.5.1) and have downgraded my NVIDIA driver to version 576.52, on Windows 10. Here is what I currently get:
Traceback (most recent call last):
  File "E:\PyCharm_Projects\MyProject\simple-x.py", line 11, in <module>
    test_cuda[1, 1](x)
    ~~~~~~~~~~~~~~~^^^
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\dispatcher.py", line 700, in __call__
    return self.dispatcher.call(
           ~~~~~~~~~~~~~~~~~~~~^
        args, self.griddim, self.blockdim, self.stream, self.sharedmem
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\dispatcher.py", line 1022, in call
    kernel = _dispatcher.Dispatcher._cuda_call(self, *args)
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\dispatcher.py", line 1030, in _compile_for_args
    return self.compile(tuple(argtypes))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\dispatcher.py", line 1296, in compile
    kernel = _Kernel(self.py_func, argtypes, **self.targetoptions)
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\dispatcher.py", line 168, in __init__
    asm = lib.get_asm_str()
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\codegen.py", line 218, in get_asm_str
    arch = nvrtc.get_arch_option(*cc)
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\cudadrv\nvrtc.py", line 482, in get_arch_option
    arch = find_closest_arch((major, minor))
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\cudadrv\nvrtc.py", line 454, in find_closest_arch
    supported_ccs = get_supported_ccs()
  File "E:\AnacondaEnvs\repro\Lib\site-packages\numba_cuda\numba\cuda\cudadrv\nvrtc.py", line 492, in get_supported_ccs
    retcode, archs = bindings_nvrtc.nvrtcGetSupportedArchs()
                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "cuda/bindings/nvrtc.pyx", line 185, in cuda.bindings.nvrtc.nvrtcGetSupportedArchs
  File "cuda/bindings/nvrtc.pyx", line 165, in cuda.bindings.nvrtc.nvrtcGetNumSupportedArchs
  File "cuda/bindings/cynvrtc.pyx", line 14, in cuda.bindings.cynvrtc.nvrtcGetNumSupportedArchs
  File "cuda/bindings/_bindings/cynvrtc.pyx", line 130, in cuda.bindings._bindings.cynvrtc._nvrtcGetNumSupportedArchs
  File "cuda/bindings/_bindings/cynvrtc.pyx", line 108, in cuda.bindings._bindings.cynvrtc.cuPythonInit
  File "cuda/bindings/_bindings/cynvrtc.pyx", line 44, in cuda.bindings._bindings.cynvrtc._cuPythonInit
  File "cuda/bindings/_bindings/cynvrtc.pyx", line 45, in cuda.bindings._bindings.cynvrtc._cuPythonInit
  File "E:\AnacondaEnvs\repro\Lib\site-packages\cuda\pathfinder\_dynamic_libs\load_nvidia_dynamic_lib.py", line 140, in load_nvidia_dynamic_lib
    return _load_lib_no_cache(libname)
  File "E:\AnacondaEnvs\repro\Lib\site-packages\cuda\pathfinder\_dynamic_libs\load_nvidia_dynamic_lib.py", line 57, in _load_lib_no_cache
    finder.raise_not_found_error()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "E:\AnacondaEnvs\repro\Lib\site-packages\cuda\pathfinder\_dynamic_libs\find_nvidia_dynamic_lib.py", line 210, in raise_not_found_error
    raise DynamicLibNotFoundError(f'Failure finding "{self.lib_searched_for}": {err}\n{att}')
cuda.pathfinder._dynamic_libs.load_dl_common.DynamicLibNotFoundError: Failure finding "nvrtc*.dll": No such file: nvrtc*.dll, No such file: nvrtc*.dll, No such file: nvrtc*.dll, No such file: nvrtc*.dll
Process finished with exit code 1
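Judging by the last frames, the lookup failure can probably be reproduced in isolation, without Numba involved at all. A minimal sketch, assuming the public cuda.pathfinder API matches the internal load_nvidia_dynamic_lib frame in the traceback:

# Reproduce just the DLL lookup, bypassing Numba entirely.
# load_nvidia_dynamic_lib and the abs_path attribute are assumed from
# cuda.pathfinder's public interface.
from cuda.pathfinder import load_nvidia_dynamic_lib

loaded = load_nvidia_dynamic_lib("nvrtc")  # raises DynamicLibNotFoundError here too
print(loaded.abs_path)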
The environment (named repro) was first created as a basic Anaconda environment via PyCharm, then NumPy was installed through PyCharm, and then I ran conda install -n repro -c conda-forge numba-cuda "cuda-version=12" as described in the installation guide.
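To check whether the environment actually ships an NVRTC DLL at all, a small scan can be run from inside it (a sketch assuming the usual conda-forge layout, which puts CUDA DLLs under Library\bin on Windows):

# Scan the active env for nvrtc DLLs. The searched subdirectories are an
# assumption about the conda-forge package layout, not something confirmed
# by the traceback.
import glob
import os
import sys

for sub in (r"Library\bin", "bin", "DLLs"):
    pattern = os.path.join(sys.prefix, sub, "nvrtc*.dll")
    print(pattern, "->", glob.glob(pattern))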
The simple test code is this one, as was suggested in a similar issue:

import numpy as np
from numba import cuda

@cuda.jit
def test_cuda(x):
    x[0] += 1

x = np.zeros(10)
x = cuda.to_device(x)
test_cuda[1, 1](x)
print(x[0])
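One check that exercises the driver bindings without touching NVRTC (which, judging by the traceback, is only loaded once a kernel is compiled) is Numba's documented cuda.detect() helper:

# Touches only the CUDA driver bindings; NVRTC is not needed until a kernel
# is actually compiled, so this separates driver problems from the
# DLL-lookup failure above.
from numba import cuda

cuda.detect()  # prints the detected devices and a support summary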
The reason I didn't go with the standard conda create -n tst-cu -c conda-forge numpy numba-cuda "cuda-version=12" is that in that case the script produces an even more cryptic result, Process finished with exit code -1066598273 (0xC06D007F), as soon as it reaches the from numba import cuda line. (As far as I can tell, 0xC06D007F is the Windows delay-load "function not found" exception, so that failure also seems to point at a DLL problem.)
Here is the list of packages installed in this environment:
# Name Version Build Channel
blas 1.0 mkl
bzip2 1.0.8 h2bbff1b_6
ca-certificates 2025.11.12 h4c7d964_0 conda-forge
cuda-bindings 12.9.4 py313h73a8f02_1 conda-forge
cuda-cccl_win-64 12.9.27 h57928b3_0 conda-forge
cuda-core 0.4.1 cuda12_py313h62f4c8b_0 conda-forge
cuda-crt-dev_win-64 12.9.86 h57928b3_2 conda-forge
cuda-crt-tools 12.9.86 h57928b3_2 conda-forge
cuda-cudart 12.9.79 he0c23c2_0 conda-forge
cuda-cudart-dev 12.9.79 he0c23c2_0 conda-forge
cuda-cudart-dev_win-64 12.9.79 he0c23c2_0 conda-forge
cuda-cudart-static 12.9.79 he0c23c2_0 conda-forge
cuda-cudart-static_win-64 12.9.79 he0c23c2_0 conda-forge
cuda-cudart_win-64 12.9.79 he0c23c2_0 conda-forge
cuda-nvcc-dev_win-64 12.9.86 h36c15f3_2 conda-forge
cuda-nvcc-impl 12.9.86 h53cbb54_2 conda-forge
cuda-nvcc-tools 12.9.86 he0c23c2_2 conda-forge
cuda-nvrtc 12.9.86 hac47afa_1 conda-forge
cuda-nvvm-dev_win-64 12.9.86 h57928b3_2 conda-forge
cuda-nvvm-impl 12.9.86 h2466b09_2 conda-forge
cuda-nvvm-tools 12.9.86 h2466b09_2 conda-forge
cuda-pathfinder 1.3.2 pyhcf101f3_0 conda-forge
cuda-version 12.9 h4f385c5_3 conda-forge
expat 2.7.3 h9214b88_0
intel-openmp 2025.0.0 haa95532_1164
libffi 3.4.4 hd77b12b_1
libmpdec 4.0.0 h827c3e9_0
libnvjitlink 12.9.86 hac47afa_2 conda-forge
libnvptxcompiler-dev 12.9.86 h57928b3_2 conda-forge
libnvptxcompiler-dev_win-64 12.9.86 h57928b3_2 conda-forge
libzlib 1.3.1 h02ab6af_0
llvmlite 0.45.1 py313h5c49287_0 conda-forge
mkl 2025.0.0 h5da7b33_930
mkl-service 2.5.2 py313h0b37514_0
mkl_fft 2.1.1 py313hbc2a22c_0
mkl_random 1.3.0 py313h42c1672_0
numba 0.62.1 py313h924e429_0 conda-forge
numba-cuda 0.20.1 pyhcf101f3_0 conda-forge
numpy 2.3.4 py313h050da96_1
numpy-base 2.3.4 py313h1e017a8_1
openssl 3.6.0 h725018a_0 conda-forge
pip 25.3 pyhc872135_0
python 3.13.9 h260b955_100_cp313
python_abi 3.13 1_cp313
setuptools 80.9.0 py313haa95532_0
sqlite 3.51.0 hda9a48d_0
tbb 2022.0.0 h214f63a_0
tbb-devel 2022.0.0 h214f63a_0
tk 8.6.15 hf199647_0
tzdata 2025b h04d1e81_0
ucrt 10.0.22621.0 haa95532_0
vc 14.3 h2df5915_10
vc14_runtime 14.44.35208 h4927774_10
vs2015_runtime 14.44.35208 ha6b5a95_10
wheel 0.45.1 py313haa95532_0
xz 5.6.4 h4754444_1
zlib 1.3.1 h02ab6af_0
zstd 1.5.7 hbeecb71_2 conda-forge
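Note that cuda-nvrtc 12.9.86 is present in this list, yet pathfinder still reports nvrtc*.dll as missing. A direct load attempt would show whether the DLL itself is usable (a sketch assuming the conda-forge Library\bin location and the nvrtc64_*.dll naming that CUDA 12 uses on Windows; nvrtcVersion is a real NVRTC C API entry point):

# Load the conda-provided NVRTC directly with ctypes and ask it for its
# version. File location and name pattern are assumptions about the
# package layout.
import ctypes
import glob
import os
import sys

candidates = glob.glob(os.path.join(sys.prefix, "Library", "bin", "nvrtc64*.dll"))
print("found:", candidates)
if candidates:
    nvrtc = ctypes.CDLL(candidates[0])
    major, minor = ctypes.c_int(), ctypes.c_int()
    nvrtc.nvrtcVersion(ctypes.byref(major), ctypes.byref(minor))
    print("NVRTC version:", major.value, minor.value)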
Running nvidia-smi from this environment gives:
Mon Nov 17 23:36:09 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 576.52 Driver Version: 576.52 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1660 Ti WDDM | 00000000:01:00.0 On | N/A |
| 0% 39C P8 7W / 120W | 1415MiB / 6144MiB | 7% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
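The GPU is a GTX 1660 Ti, i.e. compute capability 7.5, which CUDA 12 fully supports, so an architecture mismatch seems unlikely. Querying the device through the driver API (again avoiding NVRTC) would confirm it is reachable:

# get_current_device and compute_capability are documented Numba CUDA APIs;
# this goes through the driver bindings only, so it should work even while
# the NVRTC lookup is broken.
from numba import cuda

dev = cuda.get_current_device()
print(dev.name, dev.compute_capability)  # device name and (major, minor)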
At this point I'm lost and can't figure out any workaround. I hope you can help me find a way to make this work again; thanks in advance for your help!