Releases: NVIDIA/nvidia-container-toolkit
v1.15.0-rc.2
What's Changed
- Extend the `runtime.nvidia.com/gpu` CDI kind to support full GPUs and MIG devices specified by index or UUID.
- Fix bug when specifying `--dev-root` for Tegra-based systems.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp.
- Added detection of libnvdxgdmal.so.1 on WSL2
- Use devRoot to resolve MIG device nodes.
- Fix bug in determining default nvidia-container-runtime.user config value on SUSE-based systems.
- Add `crun` to the list of configured low-level runtimes.
- Added support for `--ldconfig-path` to the `nvidia-ctk cdi generate` command.
- Fix `nvidia-ctk runtime configure --cdi.enabled` for Docker.
- Add discovery of the GDRCopy device (`gdrdrv`) if the `NVIDIA_GDRCOPY` environment variable of the container is set to `enabled`. (See the example below.)
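A minimal sketch of how the options above might be combined; the output path, `--runtime=docker` value, ldconfig location, and container image are illustrative assumptions rather than values taken from this release.

```bash
# Generate a CDI spec with an explicit ldconfig path (--ldconfig-path is the new option;
# the output path and ldconfig location are assumptions).
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml --ldconfig-path=/sbin/ldconfig

# Re-apply the Docker configuration with CDI enabled (--cdi.enabled per the fix above).
sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled

# Request the GDRCopy device (gdrdrv) by setting NVIDIA_GDRCOPY=enabled; the image and
# remaining flags are placeholders.
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_GDRCOPY=enabled ubuntu nvidia-smi
```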
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2
Changes in the toolkit-container
- Bump CUDA base image version to 12.3.1.
Full Changelog: v1.15.0-rc.1...v1.15.0-rc.2
v1.14.4
What's Changed
- Include `nvidia/nvoptix.bin` in list of graphics mounts. (#127)
- Include `vulkan/icd.d/nvidia_layers.json` in list of graphics mounts. (#127)
- Fixed bug in `nvidia-ctk config` command when using `--set`. The types of applied config options are now applied correctly.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp. (#110)
- Added detection of libnvdxgdmal.so.1 on WSL2.
- Fix bug in determining default nvidia-container-runtime.user config value on SUSE-based systems. (#110)
- Add `crun` to the list of configured low-level runtimes.
- Add `--cdi.enabled` option to the `nvidia-ctk runtime configure` command to enable CDI in containerd.
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25. (See the example below.)
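A brief sketch of enabling CDI with the two options above; the `--runtime` values are assumptions about how the command is typically invoked and are not taken from these notes.

```bash
# Enable CDI support in containerd using the new --cdi.enabled option.
sudo nvidia-ctk runtime configure --runtime=containerd --cdi.enabled

# Enable CDI for Docker using --enable-cdi (requires Docker >= 25).
sudo nvidia-ctk runtime configure --runtime=docker --enable-cdi
```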
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2.
Changes in the toolkit-container
- Bumped CUDA base image version to 12.3.1.
Full Changelog: v1.14.3...v1.14.4
v1.15.0-rc.1
What's Changed
- Skip update of ldcache in containers without ldconfig. The .so.SONAME symlinks are still created.
- Normalize ldconfig path on use. This automatically adjusts the ldconfig setting applied to ldconfig.real on systems where this exists.
- Include `nvidia/nvoptix.bin` in list of graphics mounts.
- Include `vulkan/icd.d/nvidia_layers.json` in list of graphics mounts.
- Add support for `--library-search-paths` to the `nvidia-ctk cdi generate` command.
- Add support for injecting `/dev/nvidia-nvswitch*` devices if the `NVIDIA_NVSWITCH=enabled` envvar is specified. (See the example below.)
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25.
- Fixed bug in `nvidia-ctk config` command when using `--set`. The types of applied config options are now applied correctly.
- Add `--relative-to` option to the `nvidia-ctk transform root` command. This controls whether the root transformation is applied to host or container paths.
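A short sketch of two of the items above; the output path, library search path, and container image are illustrative assumptions.

```bash
# Generate a CDI spec while pointing the toolkit at an additional library location
# via the new --library-search-paths option (paths are placeholders).
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml --library-search-paths=/usr/lib/aarch64-linux-gnu

# Request injection of /dev/nvidia-nvswitch* devices by setting NVIDIA_NVSWITCH=enabled.
docker run --rm --runtime=nvidia -e NVIDIA_NVSWITCH=enabled ubuntu ls /dev/nvidia-nvswitch*
```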
Changes in libnvidia-container
- Fix device permission check when using cgroupv2 (fixes NVIDIA/libnvidia-container/#227)
Full Changelog: v1.14.3...v1.15.0-rc.1
v1.14.3
What's Changed
Changes in libnvidia-container
- Bumped version to `v1.14.3` for the NVIDIA Container Toolkit release.
Changes in the toolkit-container
- Bumped CUDA base image version to 12.2.2.
Full Changelog: v1.14.2...v1.14.3
v1.14.2
What's Changed
- Fix bug on Tegra-based systems where symlinks were not created in containers.
- Add `--csv.ignore-pattern` command line option to the `nvidia-ctk cdi generate` command. (See the example below.)
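A hedged sketch of the new option; the `--mode=csv` flag name, the ignore pattern, and the output path are assumptions used only for illustration.

```bash
# Generate a CDI spec in CSV mode while ignoring CSV entries that match a pattern.
# The pattern and paths below are placeholders, not values from this release.
sudo nvidia-ctk cdi generate --mode=csv --csv.ignore-pattern='*nvsci*' --output=/etc/cdi/nvidia.yaml
```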
Changes in libnvidia-container
- Bumped version to `v1.14.2` for the NVIDIA Container Toolkit release.
Full Changelog: v1.14.1...v1.14.2
v1.14.1
What's Changed
- Fixed bug where the contents of `/etc/nvidia-container-runtime/config.toml` were ignored by the NVIDIA Container Runtime Hook.
Changes in libnvidia-container
- Use `libelf.so` from `elfutils-libelf-devel` on RPM-based systems due to removed Mageia repositories hosting pmake and bmake.
Full Changelog: v1.14.0...v1.14.1
v1.14.0
This is a promotion of the (internal) v1.14.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit adds the following features:
- Improved support for the Container Device Interface (CDI) on Tegra-based systems
- Simplified packaging and distribution. We now only generate `.deb` and `.rpm` packages that are compatible with all supported distributions instead of releasing distribution-specific packages.
NOTE: This will be the last release that includes the nvidia-container-runtime and nvidia-docker2 packages.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.14.0
- nvidia-container-toolkit 1.14.0
- nvidia-container-runtime 3.14.0
- nvidia-docker2 2.14.0
The packages for this release are published to the libnvidia-container package repositories.
New Contributors
- @elliotcourant made their first contribution in #61
Full Changelog: v1.13.0...v1.14.0
v1.14.0-rc.3
- Added support for generating OCI hook JSON file to the `nvidia-ctk runtime configure` command.
- Remove installation of OCI hook JSON from RPM package.
- Refactored config for `nvidia-container-runtime-hook`.
- Added a `nvidia-ctk config` command which supports setting config options using a `--set` flag. (See the example below.)
- Added `--library-search-path` option to the `nvidia-ctk cdi generate` command in `csv` mode. This allows folders where libraries are located to be specified explicitly.
- Updated go-nvlib to support devices which are not present in the PCI device database. This allows the creation of /dev/char symlinks on systems with such devices installed.
- Added a `UsesNVGPUModule` info function for more robust platform detection. This is required on Tegra-based systems where libnvidia-ml.so is also supported.
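A short sketch of the two CLI additions above; the config key and value, the `--mode=csv` flag name, and the file paths are assumptions made for illustration.

```bash
# Set a config option via the new nvidia-ctk config --set flag
# (the key/value shown is just an example).
sudo nvidia-ctk config --set nvidia-container-runtime.mode=cdi

# Generate a CDI spec in CSV mode with an explicit library search folder
# (--library-search-path per the entry above; other flags are placeholders).
sudo nvidia-ctk cdi generate --mode=csv --library-search-path=/usr/lib/aarch64-linux-gnu --output=/etc/cdi/nvidia.yaml
```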
Changes from libnvidia-container v1.14.0-rc.3
- Generate the `nvc.h` header file automatically so that the version does not need to be explicitly bumped.
Changes in the toolkit-container
- Set `NVIDIA_VISIBLE_DEVICES=void` to prevent injection of NVIDIA devices and drivers into the NVIDIA Container Toolkit container.
v1.14.0-rc.2
- Fix bug causing incorrect nvidia-smi symlink to be created on WSL2 systems with multiple driver roots.
- Remove dependency on coreutils when installing package on RPM-based systems.
- Create output folders if required when running `nvidia-ctk runtime configure`.
- Generate default config as post-install step.
- Added support for detecting GSP firmware at custom paths when generating CDI specifications.
- Added logic to skip the extraction of image requirements if `NVIDIA_DISABLE_REQUIRES` is set to `true`. (See the example below.)
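A minimal sketch of the environment variable above; the image tag and `--gpus` flag are illustrative assumptions.

```bash
# Skip extraction and enforcement of the image's CUDA requirement constraints
# by setting NVIDIA_DISABLE_REQUIRES=true (image tag is a placeholder).
docker run --rm --gpus all -e NVIDIA_DISABLE_REQUIRES=true nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```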
Changes from libnvidia-container v1.14.0-rc.2
- Include Shared Compiler Library (libnvidia-gpucomp.so) in the list of compute libraries.
Changes in the toolkit-container
- Ensure that common envvars have higher priority when configuring the container engines.
- Bump CUDA base image version to 12.2.0.
- Remove installation of nvidia-experimental runtime. This is superseded by the NVIDIA Container Runtime in CDI mode.
v1.14.0-rc.1
- chore(cmd): Fixing minor spelling error. by @elliotcourant in #61
- Add support for updating containerd configs to the `nvidia-ctk runtime configure` command.
- Create file in `etc/ld.so.conf.d` with permissions `644` to support non-root containers.
- Generate CDI specification files with `644` permissions to allow rootless applications (e.g. podman).
- Add `nvidia-ctk cdi list` command to show the known CDI devices. (See the example below.)
- Add support for generating merged devices (e.g. the `all` device) to the nvcdi API.
- Use a wildcard pattern to locate libcuda.so when generating a CDI specification to support platforms where a patch version is not specified.
- Update go-nvlib to skip devices that are not MIG capable when generating CDI specifications.
- Add `nvidia-container-runtime-hook.path` config option to specify the NVIDIA Container Runtime Hook path explicitly.
- Fix bug in creation of `/dev/char` symlinks by failing the operation if kernel modules are not loaded.
- Add option to load kernel modules when creating device nodes.
- Add option to create device nodes when creating `/dev/char` symlinks.
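A hedged sketch of the items above that touch the CLI and configuration; the TOML section layout and hook path shown are assumptions, not values taken from these notes.

```bash
# List the CDI devices known from generated specifications.
nvidia-ctk cdi list

# Point the runtime at an explicit hook binary via the nvidia-container-runtime-hook.path
# option; the section layout and path below are illustrative assumptions.
cat <<'EOF' | sudo tee -a /etc/nvidia-container-runtime/config.toml
[nvidia-container-runtime-hook]
path = "/usr/bin/nvidia-container-runtime-hook"
EOF
```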
Changes from libnvidia-container v1.14.0-rc.1
- Support OpenSSL 3 with the Encrypt/Decrypt library
Changes in the toolkit-container
- Bump CUDA base image version to 12.1.1.
- Unify environment variables used to configure runtimes.
v1.13.5
What's Changed
- Remove dependency on `coreutils` when installing the NVIDIA Container Toolkit on RPM-based systems.
- Added support for detecting GSP firmware at custom paths when generating CDI specifications.
Changes in libnvidia-container
- Include Shared Compiler Library (libnvidia-gpucomp.so) in the list of compute libraries.
Full Changelog: v1.13.4...v1.13.5
v1.13.4
This release only bumps the CUDA Base Image version in the toolkit-container component.
What's Changed
Changes in the toolkit-container
- Bump CUDA base image version to 12.2.0.
Full Changelog: v1.13.3...v1.13.4