fix(chart): derive ld.so.preload from devicePlugin.libPath to fix non-default path deployments#1714

Open
ilia-medvedev wants to merge 1 commit into Project-HAMi:master from ilia-medvedev:fix/ldsopreload-libpath

Conversation

@ilia-medvedev

@ilia-medvedev ilia-medvedev commented Mar 26, 2026

Fixes #1713, #971

vgpu-init.sh copies /k8s-vgpu/lib/nvidia/ld.so.preload to devicePlugin.libPath on the
host, but the file in the image hardcodes /usr/local/vgpu/libvgpu.so. When libPath is set
to anything else (e.g. /var/lib/hami/vgpu on Bottlerocket EKS where /usr/local is
read-only), the copied file points to the wrong path and libvgpu.so fails to preload in
workload containers on every device plugin restart.

Fix: add ld.so.preload as a data key in the existing device-plugin ConfigMap, rendered
from {{ .Values.devicePlugin.libPath }}/libvgpu.so, and mount it at
/k8s-vgpu/lib/nvidia/ld.so.preload via subPath on the existing deviceconfig volume.
vgpu-init.sh's MD5 check then picks up the correct path from the ConfigMap instead of the
hardcoded image value. No new Kubernetes resources are needed.
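The chart change can be sketched roughly as follows. This is an illustrative excerpt, not the chart's actual templates: the ConfigMap name and file layout are assumptions; only the ld.so.preload key, the templated libPath value, and the subPath mount are taken from the description above.

```yaml
# Illustrative excerpt of the device-plugin ConfigMap template
apiVersion: v1
kind: ConfigMap
metadata:
  name: hami-device-plugin   # hypothetical name; the real chart may differ
data:
  # Rendered from devicePlugin.libPath, e.g. /var/lib/hami/vgpu/libvgpu.so
  ld.so.preload: |
    {{ .Values.devicePlugin.libPath }}/libvgpu.so
---
# Illustrative excerpt of the container spec in daemonsetnvidia.yaml
volumeMounts:
  - name: deviceconfig
    mountPath: /k8s-vgpu/lib/nvidia/ld.so.preload
    subPath: ld.so.preload
    readOnly: true
```

Mounting with subPath overlays only the single ld.so.preload file, so the rest of /k8s-vgpu/lib/nvidia in the image stays untouched.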

…-default path deployments

vgpu-init.sh copies /k8s-vgpu/lib/nvidia/ld.so.preload to devicePlugin.libPath on the host,
but the file in the image hardcodes /usr/local/vgpu/libvgpu.so regardless of the configured
libPath. On systems where libPath must be changed (e.g. Bottlerocket EKS nodes where /usr/local
is read-only), the copied ld.so.preload points to the wrong path, causing libvgpu.so to fail
to preload in every workload container after a device plugin pod restart.

Add ld.so.preload as a second data key in the existing device-plugin ConfigMap, rendered from
{{ .Values.devicePlugin.libPath }}/libvgpu.so. Mount it into the device-plugin container at
/k8s-vgpu/lib/nvidia/ld.so.preload using subPath on the existing deviceconfig volume. The
vgpu-init.sh MD5-based copy logic then picks up the correct path from the ConfigMap instead
of the image's hardcoded value.
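The MD5-based copy step referenced above can be sketched as follows. This is a hypothetical reconstruction, not the real vgpu-init.sh: variable names, env vars, and file paths are illustrative; only the compare-then-copy behavior is taken from the commit message.

```shell
#!/bin/sh
# Illustrative sketch of vgpu-init.sh's copy logic: install ld.so.preload
# onto the host libPath only when its content (MD5) differs from the source.
preload_src="${PRELOAD_SRC:-/k8s-vgpu/lib/nvidia/ld.so.preload}"
preload_dst="${LIB_PATH:-/usr/local/vgpu}/ld.so.preload"

sync_preload() {
  # Copy when the destination is missing or its MD5 differs from the source.
  if [ ! -f "$preload_dst" ] || \
     [ "$(md5sum < "$preload_src")" != "$(md5sum < "$preload_dst")" ]; then
    cp "$preload_src" "$preload_dst"
  fi
}
```

With ld.so.preload mounted from the ConfigMap, the source file already contains the templated libPath, so this copy propagates the correct path to the host on every restart.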

Fixes Project-HAMi#1713

Signed-off-by: ilia-medvedev <ilia.medvedev@gong.io>
@hami-robot
Contributor

hami-robot bot commented Mar 26, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ilia-medvedev
Once this PR has been reviewed and has the lgtm label, please assign fouof for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@hami-robot
Contributor

hami-robot bot commented Mar 26, 2026

Welcome @ilia-medvedev! It looks like this is your first PR to Project-HAMi/HAMi 🎉

@github-actions github-actions bot added the kind/bug Something isn't working label Mar 26, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where the vgpu-init.sh script failed to correctly preload libvgpu.so when the devicePlugin.libPath was set to a non-default location. The fix involves making the ld.so.preload configuration dynamic, leveraging Kubernetes ConfigMaps to ensure the correct library path is always used, thereby improving the robustness and flexibility of deployments, especially in environments with read-only file systems.

Highlights

  • Dynamic ld.so.preload Path: The ld.so.preload content is now dynamically generated based on the devicePlugin.libPath value, resolving issues where the hardcoded path in the image caused failures in non-default libPath deployments.
  • ConfigMap Integration: A new ld.so.preload data key has been added to the existing device-plugin ConfigMap, which is templated to include the correct libvgpu.so path.
  • DaemonSet Volume Mount: The daemonsetnvidia.yaml has been updated to mount this new ld.so.preload entry from the ConfigMap into the container at /k8s-vgpu/lib/nvidia/ld.so.preload, ensuring vgpu-init.sh picks up the correct path.


@hami-robot hami-robot bot added the size/XS label Mar 26, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request modifies the Hami device plugin Helm chart. It adds an ld.so.preload entry to configmap.yaml, which specifies the path to libvgpu.so using a Helm value. Correspondingly, daemonsetnvidia.yaml is updated to mount this ld.so.preload configuration into the container at /k8s-vgpu/lib/nvidia/ld.so.preload as a read-only volume. This change likely enables the preloading of libvgpu.so for the NVIDIA device plugin. There is no feedback to provide as no review comments were made.

@codecov

codecov bot commented Mar 30, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

Flag Coverage Δ
unittests 51.90% <ø> (-0.03%) ⬇️

Flags with carried forward coverage won't be shown.
see 1 file with indirect coverage changes


@archlitchi
Member

archlitchi commented Mar 30, 2026

Thanks for the fix, have you tested it in your cluster?

@ilia-medvedev
Author

@archlitchi I have indeed. Works.



Development

Successfully merging this pull request may close these issues.

ld.so.preload hardcodes /usr/local/vgpu regardless of devicePlugin.libPath, breaks on Bottlerocket EKS

2 participants