I ran 'nvidia-smi' and a CUDA version check and got: error 35 -> CUDA driver version is insufficient for CUDA runtime version, Result = FAIL, and nvidia-smi reports a kernel API version mismatch. Any idea why this may be happening or how to fix it? I can't find anything else related to it.

On our machine running Ubuntu 18.04, when we type nvidia-smi we get this error: Failed to initialize NVML: Driver/library version mismatch, and TensorFlow is not able to use the GPU. Other details: echo $PATH gives /home/sks/Deskt…

CUDA version mismatch on Ubuntu 18.04: the output of nvidia-smi only shows the CUDA version the current driver is compatible with (8.0, 9.0, etc.); it is not indicative of which CUDA toolkit is installed. The necessary support for the driver API comes with the GPU driver, while the runtime API (and nvcc) comes with the CUDA toolkit.

When I run nvidia-smi I get the following message: Failed to initialize NVML: Driver/library version mismatch. An hour ago I received the same message, uninstalled my CUDA library, and was then able to run nvidia-smi.

CUDA version mismatch: nvcc -V now returns 9.2, but nvidia-smi says CUDA 10.0.
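The "Failed to initialize NVML" message comes from the NVML user-space library that ships with the driver, and it appears when that library and the loaded nvidia kernel module come from different driver versions (commonly after a driver upgrade without a reboot or module reload). As a minimal sketch, not taken from any of the quoted posts, the same check can be reproduced directly against the NVML C API; the file name and build line are assumptions:

/* nvml_check.c -- minimal sketch: ask NVML the same question nvidia-smi asks.
 * Build (assuming the driver's library is on the default linker path):
 *   gcc nvml_check.c -o nvml_check -lnvidia-ml
 */
#include <stdio.h>
#include <nvml.h>

int main(void) {
    /* On a driver/library mismatch this typically fails with a
     * version-mismatch error, mirroring the nvidia-smi message. */
    nvmlReturn_t rc = nvmlInit_v2();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit_v2 failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    char driver[NVML_SYSTEM_DRIVER_VERSION_BUFFER_SIZE];
    char lib[NVML_SYSTEM_NVML_VERSION_BUFFER_SIZE];
    if (nvmlSystemGetDriverVersion(driver, sizeof driver) == NVML_SUCCESS &&
        nvmlSystemGetNVMLVersion(lib, sizeof lib) == NVML_SUCCESS) {
        printf("Driver version       : %s\n", driver);
        printf("NVML library version : %s\n", lib);
    }

    nvmlShutdown();
    return 0;
}

When the two versions disagree, a reboot (or unloading and reloading the nvidia kernel modules so the running module matches the installed libraries) usually clears this particular error.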
Different CUDA versions shown by nvcc and nvidia-smi: CUDA has two primary APIs, the runtime API and the driver API, and each has its own version. nvidia-smi reports the driver API version, i.e. the highest CUDA version the installed driver supports, while nvcc -V reports the version of the CUDA toolkit (runtime API) that is installed, so the two numbers can legitimately differ. The toolkit itself (e.g. CUDA Toolkit 11.0 Update 1) is installed separately from the download page for your target platform.
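To see both numbers from inside a program, the runtime API exposes them directly: cudaDriverGetVersion() returns the newest CUDA version the installed driver supports (what the nvidia-smi header reflects), and cudaRuntimeGetVersion() returns the version of the runtime the binary was built against (what corresponds to nvcc). A minimal sketch, assuming a working nvcc install; the file name is an assumption:

// version_check.cu -- print the driver API and runtime API versions side by side.
// Build: nvcc version_check.cu -o version_check
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int driverVersion = 0, runtimeVersion = 0;

    // Highest CUDA version supported by the installed driver
    // (0 here means no CUDA-capable driver is loaded at all).
    cudaDriverGetVersion(&driverVersion);

    // Version of the CUDA runtime this binary was linked against
    // (corresponds to the installed toolkit / nvcc).
    cudaRuntimeGetVersion(&runtimeVersion);

    // Versions are encoded as 1000*major + 10*minor, e.g. 10020 for 10.2.
    printf("Driver supports up to CUDA : %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime (toolkit) version  : %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    // The "CUDA driver version is insufficient for CUDA runtime version"
    // failure from the question means driverVersion < runtimeVersion here:
    // either upgrade the driver or install an older, matching toolkit.
    if (driverVersion < runtimeVersion) {
        printf("Driver is older than the runtime.\n");
    }
    return 0;
}

The reverse situation, nvcc reporting 9.2 while nvidia-smi shows CUDA 10.0, is normal: the driver may support a newer CUDA version than the toolkit that happens to be installed.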