When I try to run vulkaninfo inside the Holoscan container on a RHEL8 host, it fails.
$ docker run --runtime=nvidia --gpus all -it --rm nvcr.io/nvidia/clara-holoscan/holoscan:v3.2.0-dgpu /bin/bash
root@a26235e4fd7f:/opt/nvidia/holoscan# apt update && apt install vulkan-tools
root@a26235e4fd7f:/opt/nvidia/holoscan# vulkaninfo
Cannot create Vulkan instance.
This problem is often caused by a faulty installation of the Vulkan driver or attempting to use a GPU that does not support Vulkan.
ERROR at ./vulkaninfo/vulkaninfo.h:649:vkCreateInstance failed with ERROR_INCOMPATIBLE_DRIVER
It seems that /etc/vulkan/icd.d/ and libGLX.so are not mounted into the container.
$ rpm -qa | grep nvidia-container-toolkit
nvidia-container-toolkit-1.17.6-1.x86_64
nvidia-container-toolkit-base-1.17.6-1.x86_64
$ rpm -qf /usr/share/vulkan/icd.d/nvidia_icd.x86_64.json
nvidia-driver-libs-575.51.03-1.el8.x86_64
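A quick way to see the filename mismatch between what gets mounted and what the RHEL8 driver package actually installs is to stat both ICD name variants under the usual Vulkan loader directories. This is only a minimal diagnostic sketch; the directory list below is an assumption based on standard loader search paths, not taken from the toolkit.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Plain name vs. the arch-suffixed name installed by nvidia-driver-libs on RHEL8.
	// (Both lists are assumptions for illustration only.)
	prefixes := []string{"/etc/vulkan/icd.d", "/usr/share/vulkan/icd.d"}
	names := []string{"nvidia_icd.json", "nvidia_icd.x86_64.json"}

	for _, prefix := range prefixes {
		for _, name := range names {
			path := filepath.Join(prefix, name)
			if _, err := os.Stat(path); err == nil {
				fmt.Println("found:  ", path)
			} else {
				fmt.Println("missing:", path)
			}
		}
	}
}
Running this on the host shows only the arch-suffixed file, while inside the container neither variant is present.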
$ docker run --runtime=nvidia --gpus all -it --rm nvcr.io/nvidia/clara-holoscan/holoscan:v3.2.0-dgpu /bin/bash
=========================
== NVIDIA Holoscan SDK ==
=========================
NVIDIA Holoscan SDK Version: 3.2.0
Container image Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Refer to /opt/nvidia/legal for inherited CUDA and TensorRT container licenses and copyrights.
This container includes the Holoscan libraries, GXF extensions, headers, example source
code, and sample datasets, as well as all Holoscan SDK dependencies.
Visit the User Guide to get started with the Holoscan SDK:
https://docs.nvidia.com/holoscan/sdk-user-guide/getting_started.html
Python, C++, and GXF examples are installed in /opt/nvidia/holoscan/examples alongside their source
code, and run instructions:
https://github.com/nvidia-holoscan/holoscan-sdk/tree/main/examples#readme.
See the HoloHub repository for a collection of Holoscan operators and applications:
https://github.com/nvidia-holoscan/holohub
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for NVIDIA Holoscan SDK. NVIDIA recommends the use of the following flags:
docker run --runtime=nvidia --gpus all --cap-add CAP_SYS_PTRACE --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...
root@a26235e4fd7f:/opt/nvidia/holoscan# apt update && apt install vulkan-tools
Get:1 https://download.docker.com/linux/ubuntu jammy InRelease [48.8 kB]
Get:2 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [61.3 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]
Get:4 http://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB]
Get:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB]
Get:6 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB]
Get:7 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [1246 kB]
Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [4410 kB]
Get:10 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]
Get:11 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
Get:12 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1792 kB]
Get:13 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [55.7 kB]
Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [3264 kB]
Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1553 kB]
Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [4564 kB]
Get:17 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [35.2 kB]
Get:18 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [83.2 kB]
Get:19 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [2953 kB]
Get:20 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [47.7 kB]
Fetched 38.7 MB in 5s (8365 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
87 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
vulkan-tools
0 upgraded, 1 newly installed, 0 to remove and 87 not upgraded.
Need to get 241 kB of archives.
After this operation, 1124 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy/universe amd64 vulkan-tools amd64 1.3.204.0+dfsg1-1 [241 kB]
Fetched 241 kB in 2s (156 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 78, <> line 1.)
debconf: falling back to frontend: Readline
Selecting previously unselected package vulkan-tools.
(Reading database ... 25818 files and directories currently installed.)
Preparing to unpack .../vulkan-tools_1.3.204.0+dfsg1-1_amd64.deb ...
Unpacking vulkan-tools (1.3.204.0+dfsg1-1) ...
Setting up vulkan-tools (1.3.204.0+dfsg1-1) ...
root@a26235e4fd7f:/opt/nvidia/holoscan# vulkaninfo
Cannot create Vulkan instance.
This problem is often caused by a faulty installation of the Vulkan driver or attempting to use a GPU that does not support Vulkan.
ERROR at ./vulkaninfo/vulkaninfo.h:649:vkCreateInstance failed with ERROR_INCOMPATIBLE_DRIVER
root@a26235e4fd7f:/opt/nvidia/holoscan# find / -iname '*nvidia*.json*'
/etc/vulkan/implicit_layer.d/nvidia_layers.json
/usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
/usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
/usr/share/glvnd/egl_vendor.d/10_nvidia.json
root@a26235e4fd7f:/opt/nvidia/holoscan# cat /etc/vulkan/implicit_layer.d/nvidia_layers.json
{
    "file_format_version" : "1.0.1",
    "layers": [{
        "name": "VK_LAYER_NV_optimus",
        "type": "INSTANCE",
        "library_path": "libGLX_nvidia.so.0",
        "api_version" : "1.4.303",
        "implementation_version" : "1",
        "description" : "NVIDIA Optimus layer",
        "functions": {
            "vkGetInstanceProcAddr": "vk_optimusGetInstanceProcAddr",
            "vkGetDeviceProcAddr": "vk_optimusGetDeviceProcAddr"
        },
        "enable_environment": {
            "__NV_PRIME_RENDER_OFFLOAD": "1"
        },
        "disable_environment": {
            "DISABLE_LAYER_NV_OPTIMUS_1": ""
        }
    },{
        "name": "VK_LAYER_NV_present",
        "type": "INSTANCE",
        "library_path": "libnvidia-present.so.575.51.03",
        "api_version" : "1.4.303",
        "implementation_version" : "1",
        "description" : "NVIDIA GR2608 layer",
        "functions": {
            "vkGetInstanceProcAddr": "vk_nvpGetInstanceProcAddr",
            "vkGetDeviceProcAddr": "vk_nvpGetDeviceProcAddr"
        },
        "enable_environment": {
            "NVPRESENT_ENABLE_SMOOTH_MOTION": "1"
        },
        "disable_environment": {
            "DISABLE_LAYER_NV_GR2608_1": ""
        }
    }]
}
root@a26235e4fd7f:/opt/nvidia/holoscan# cat /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
{
    "file_format_version" : "1.0.0",
    "ICD" : {
        "library_path" : "libnvidia-egl-wayland.so.1"
    }
}
root@a26235e4fd7f:/opt/nvidia/holoscan# cat /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
{
    "file_format_version" : "1.0.0",
    "ICD" : {
        "library_path" : "libnvidia-egl-gbm.so.1"
    }
}
root@a26235e4fd7f:/opt/nvidia/holoscan# cat /usr/share/glvnd/egl_vendor.d/10_nvidia.json
{
    "file_format_version" : "1.0.0",
    "ICD" : {
        "library_path" : "libEGL_nvidia.so.0"
    }
}
root@a26235e4fd7f:/opt/nvidia/holoscan# ls /usr/lib64/libGLX*
ls: cannot access '/usr/lib64/libGLX*': No such file or directory
root@a26235e4fd7f:/opt/nvidia/holoscan# ls /usr/lib64/
gbm ld-linux-x86-64.so.2 libnvidia-egl-gbm.so.1 libnvidia-egl-gbm.so.1.1.2 libnvidia-egl-wayland.so.1 libnvidia-egl-wayland.so.1.1.13 mft xorg
root@a26235e4fd7f:/opt/nvidia/holoscan# nvidia-smi
Mon Jun 2 00:38:18 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.51.03 Driver Version: 575.51.03 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA RTX 4500 Ada Gene... Off | 00000000:47:00.0 Off | Off |
| 30% 33C P8 5W / 210W | 33MiB / 24570MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
This code seems to expect the ICD filename to be nvidia_icd.json, not the nvidia_icd.x86_64.json that the RHEL8 driver package installs (a possible adjustment is sketched after the excerpt).
nvidia-container-toolkit/internal/discover/graphics.go
Lines 88 to 90 in 450f73a
| "vulkan/icd.d/nvidia_icd.json", | |
| "vulkan/icd.d/nvidia_layers.json", | |
| "vulkan/implicit_layer.d/nvidia_layers.json", |
This code does not appear to include libGLX, which would explain why it is not mounted (a possible addition is sketched after the excerpt).
nvidia-container-toolkit/internal/discover/graphics.go
Lines 111 to 123 in 450f73a
[]string{
	// The libnvidia-egl-gbm and libnvidia-egl-wayland libraries do not
	// have the RM version. Use the *.* pattern to match X.Y.Z versions.
	"libnvidia-egl-gbm.so.*.*",
	"libnvidia-egl-wayland.so.*.*",
	// We include the following libraries to have them available for
	// symlink creation below:
	// If CDI injection is used, these should already be detected as:
	// * libnvidia-allocator.so.RM_VERSION
	// * libnvidia-vulkan-producer.so.RM_VERSION
	// but need to be handled for the legacy case too.
	"libnvidia-allocator.so." + cudaVersionPattern,
	"libnvidia-vulkan-producer.so." + cudaVersionPattern,