
Conversation

@cliffburdick (Contributor) commented Nov 26, 2025

Summary by CodeRabbit

  • New Features

    • Added DPDK-first default build target and separate targets for GPUNetIO and Rivermax.
    • Introduced an upstream DPDK build flow and optional GPUNetIO/DOCA flow.
  • Chores

    • Updated DPDK requirement to 24.11.3+ and DOCA requirement to 2.8+ for GPUNetIO.
    • Reorganized multi-stage build pipeline for clearer, modular target flows.
  • Documentation

    • Clarified target usage and requirements in README.



coderabbitai bot commented Nov 26, 2025

Walkthrough

Restructures the Docker multi-stage build to default to a dpdk path, adds dpdk-build, gpunetio-deps, gpunetio and rivermax stages, removes DOCA from the default path, updates dependency versions, and refactors CMake DPDK discovery and CUDA linkage across managers.

Changes

Cohort / File(s) Summary
Docker Build Infrastructure
operators/advanced_network/Dockerfile
Renamed base stage to base-deps; removed DOCA/GPUNetIO APT setup from default path; replaced large DOCA package groups with a lean base set; added dpdk-build stage to fetch/build upstream DPDK; added dpdk (default), gpunetio-deps, gpunetio, and reworked rivermax stages; clarified target flow and default target.
Documentation
operators/advanced_network/README.md
Updated requirements: DPDK -> 24.11.3+; DOCA -> 2.8+ for GPUNetIO; clarified MOFED note.
Top-level CMake
operators/advanced_network/advanced_network/CMakeLists.txt
Added CUDAToolkit requirement; introduced DPDK_PATH_UPSTREAM and DPDK_PATH_MLNX; expanded PKG_CONFIG_PATH to include upstream/MLNX DPDK and DOCA paths; changed default ANO_MGR from "dpdk gpunetio" to "dpdk" and adjusted comments.
DPDK manager CMake
operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
Consolidated DPDK discovery into a unified pkg-config flow with a Mellanox fallback using architecture-aware PKG_CONFIG_PATH; added message(STATUS ...) for discovered DPDK; introduced DPDK_EXTRA_LIBS and consolidated target linking/includes/options; added CUDA::cuda_driver to PRIVATE linkage and set CUDA target properties (CUDA_SEPARABLE_COMPILATION, CUDA_RESOLVE_DEVICE_SYMBOLS, CUDA_ARCHITECTURES).
GPUNetIO manager CMake
operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt
Added CUDA::cudart and CUDA::cuda_driver to the target PRIVATE linkage.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

  • Pay particular attention to:
    • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt — DPDK discovery refactor, MLNX fallback, and new link flags.
    • operators/advanced_network/Dockerfile — multi-stage flow, dpdk-build compile steps, and architecture-aware package handling.
    • Consistency of new CMake variables (DPDK_PATH_UPSTREAM, DPDK_PATH_MLNX) and updated PKG_CONFIG_PATH usage across CMake files.
    • CUDA property additions (CUDA_SEPARABLE_COMPILATION, CUDA_RESOLVE_DEVICE_SYMBOLS, CUDA_ARCHITECTURES) and their impact on build/link behavior.

Pre-merge checks

✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and specifically identifies the main change: adding support for upstream DPDK in the Dockerfile, which aligns with the primary modifications across multiple build configuration files.




@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt (1)

1-1: Update copyright header to include current year 2025.

The file header shows "Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES" but should include 2025 as the current year per pipeline compliance check.

Apply this diff:

-# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
operators/advanced_network/advanced_network/CMakeLists.txt (1)

1-1: Update copyright header to include current year 2025.

The file header shows "Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES" but should include 2025 as the current year per pipeline compliance check.

Apply this diff:

-# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt (1)

1-1: Update copyright header to include current year 2025.

The file header shows "Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES" but should include 2025 as the current year per pipeline compliance check.

Apply this diff:

-# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
🧹 Nitpick comments (1)
operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt (1)

44-54: Verify necessity of CUDA linkage in CPU-only DPDK manager.

The dpdk manager links CUDA::cudart and CUDA::cuda_driver (lines 52-53), but dpdk is a CPU-only backend without GPU kernel execution. While this may be intentional for consistency (similar to bundling patterns in video_streaming_server per learnings), it warrants verification. If dpdk truly doesn't require GPU access, removing CUDA linkage would align the CPU backend's dependencies more accurately.

Consider either:

  1. If CUDA is unnecessary for dpdk: Remove lines 52-53 to keep the CPU backend's dependency footprint minimal.
  2. If CUDA is needed for consistency: Add a comment explaining the rationale, e.g., # CUDA linkage for consistency with gpunetio backend.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 19a457b and 2a3e1e6.

📒 Files selected for processing (5)
  • operators/advanced_network/Dockerfile (4 hunks)
  • operators/advanced_network/README.md (1 hunks)
  • operators/advanced_network/advanced_network/CMakeLists.txt (3 hunks)
  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt (1 hunks)
  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
operators/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

CMakeLists.txt must include build system integration using add_holohub_operator() for operators

Files:

  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt
  • operators/advanced_network/advanced_network/CMakeLists.txt
  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
{operators,applications,workflows,gxf_extensions,tutorials}/**/README.md

📄 CodeRabbit inference engine (CONTRIBUTING.md)

README.md must include purpose, usage instructions, requirements, examples, and architecture overview

Files:

  • operators/advanced_network/README.md
**/*.md

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Use Holoscan SDK glossary terms when referring to HoloHub-specific locations in documentation

Files:

  • operators/advanced_network/README.md
🧠 Learnings (5)
📚 Learning: 2025-10-22T23:47:42.896Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_server/cpp/CMakeLists.txt:109-111
Timestamp: 2025-10-22T23:47:42.896Z
Learning: In the video streaming server application (applications/video_streaming/video_streaming_server/cpp/CMakeLists.txt), bundling libcudart.so.12 from the NGC operator binaries is intentional to ensure consistency with NGC binaries, even though the target links to CUDA::cudart.

Applied to files:

  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt
📚 Learning: 2025-11-24T16:28:06.281Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.281Z
Learning: Applies to **CMakeLists.txt : Use CMake target dependencies: add DEPENDS EXTENSIONS for operators wrapping GXF extensions, add DEPENDS OPERATORS for applications/workflows

Applied to files:

  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt
  • operators/advanced_network/advanced_network/CMakeLists.txt
  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
📚 Learning: 2025-11-24T16:28:06.280Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.280Z
Learning: Applies to operators/**/CMakeLists.txt : CMakeLists.txt must include build system integration using add_holohub_operator() for operators

Applied to files:

  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt
  • operators/advanced_network/advanced_network/CMakeLists.txt
📚 Learning: 2025-11-24T16:28:06.280Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.280Z
Learning: Applies to {applications,workflows}/**/CMakeLists.txt : CMakeLists.txt must include build system integration using add_holohub_application() for applications and workflows

Applied to files:

  • operators/advanced_network/advanced_network/CMakeLists.txt
📚 Learning: 2025-10-22T16:37:54.211Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_server/cpp/CMakeLists.txt:50-52
Timestamp: 2025-10-22T16:37:54.211Z
Learning: When an application needs to bundle a specific version of a library (e.g., OpenSSL 3.x) that differs from the system version (e.g., OpenSSL 1.1.x), linking directly to absolute .so file paths combined with RPATH configuration is the correct approach. Do not suggest using find_package() in these cases, as it would find the system version instead of the bundled version.

Applied to files:

  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
🪛 GitHub Actions: Check Compliance
operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt

[error] 1-1: Copyright header incomplete: current year not included in the header.

operators/advanced_network/advanced_network/CMakeLists.txt

[error] 1-1: Copyright header incomplete: current year not included in the header.

operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt

[error] 1-1: Copyright header incomplete: current year not included in the header.

🔇 Additional comments (9)
operators/advanced_network/README.md (1)

1-32: Documentation updates align with version changes.

The Requirements section properly reflects the updated DPDK 24.11.3 and DOCA 2.8 versions. The README is well-structured with comprehensive examples, architecture overview (Managers section), and configuration parameters. Requirements per coding guidelines are met: purpose (lines 19-22), usage instructions (example configs provided), requirements (lines 24-31), examples (YAML configs in sections 265-320, 446-492), and architecture overview (Managers section 59-96).

operators/advanced_network/Dockerfile (3)

125-180: DPDK multi-stage build structure is sound.

The new dpdk-build stage properly downloads DPDK 24.11.3 from the official repository, builds with meson/ninja, and installs to /usr/local. The driver disabling list (line 150) excludes mlx5, ensuring ConnectX-6 support is compiled. DOCA repo configuration and mlnx-ofed-kernel-utils installation correctly support both the default dpdk and gpunetio targets. Architecture-aware path configuration is correct.


158-178: DOCA repo setup is appropriately placed for reuse.

Configuring the DOCA APT repository in dpdk-build allows gpunetio-deps to inherit it without duplication. Architecture detection and repo link construction are correct. The mlnx-ofed-kernel-utils installation provides the ibdev2netdev utility required by system tuning scripts.
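A minimal sketch of the architecture detection described here, using the amd64 → x86_64 and arm64 → arm64-sbsa mapping noted elsewhere in this review (the `doca_arch` helper name is hypothetical):

```shell
# Map Docker's TARGETARCH values to the architecture names used by the DOCA APT repo.
doca_arch() {
  case "$1" in
    amd64) echo "x86_64" ;;
    arm64) echo "arm64-sbsa" ;;
    *) echo "unsupported TARGETARCH: $1" >&2; return 1 ;;
  esac
}

doca_arch amd64   # x86_64
doca_arch arm64   # arm64-sbsa
```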


182-297: Stage hierarchy correctly separates default DPDK from optional DOCA/GPUNetIO.

The three-tier design—dpdk-build as the common base, dpdk as the streamlined default, and gpunetio-deps → gpunetio for GPU acceleration—cleanly separates concerns. The --target specifier allows users to select GPUNetIO or Rivermax without pulling unnecessary DOCA dependencies into the default build. The stage descriptions (lines 183-186, 211-214, 292-295) clearly document the purpose of each target.

operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt (1)

67-68: CUDA linkage is appropriate for GPU-accelerated GPUNetIO backend.

Adding CUDA::cudart and CUDA::cuda_driver to the private linkage is correct since GPUNetIO executes CUDA kernels (adv_network_doca_kernels.cu) for GPU-to-NIC communication and requires direct GPU memory/kernel access via the CUDA driver.

operators/advanced_network/advanced_network/CMakeLists.txt (3)

23-23: Verify whether CUDAToolkit should be conditionally required.

Marking CUDAToolkit as globally REQUIRED forces CUDA availability for all backends, but the dpdk backend (the new default) is CPU-only and does not intrinsically require CUDA. Only the gpunetio backend requires GPU execution. Consider making CUDAToolkit required only when gpunetio is in the ANO_MGR_LIST, or ensure the global requirement is intentional for downstream consistency.


36-44: PKG_CONFIG_PATH configuration properly supports both upstream and Mellanox DPDK.

The path configuration distinguishes between upstream DPDK (installed to /usr/local by default) and Mellanox-packaged DPDK (/opt/mellanox/dpdk), with architecture-aware subdirectory handling. The prioritized search order (upstream first, then Mellanox) is sensible. DOCA and related paths are appropriately included.
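The prioritized ordering can be illustrated with a shell sketch (directory names are assumptions based on the paths mentioned in this review; the real list is assembled inside CMake, not a shell script):

```shell
# Upstream DPDK first, then Mellanox DPDK, then DOCA, so upstream wins when both exist.
arch="$(uname -m)-linux-gnu"
PKG_CONFIG_PATH="/usr/local/lib/$arch/pkgconfig"
PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/opt/mellanox/dpdk/lib/$arch/pkgconfig"
PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/opt/mellanox/doca/lib/$arch/pkgconfig"
export PKG_CONFIG_PATH
echo "$PKG_CONFIG_PATH"
```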


82-85: Default manager changed appropriately to dpdk-only.

Changing the default ANO_MGR from "dpdk gpunetio" to "dpdk" aligns with the Dockerfile's new structure where dpdk is the default target and gpunetio requires explicit opt-in via --target gpunetio. The comment correctly notes that DOCA is not included in the default, reducing dependencies for CPU-only use cases.

operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt (1)

23-34: Unified pkg-config DPDK discovery improves robustness.

Consolidating upstream and Mellanox DPDK discovery into a single pkg-config query (with fallback to Mellanox paths) is cleaner than previous branching logic. The architecture-aware fallback path and status messages aid debugging. The quiet first attempt followed by informative fallback messaging is appropriate.
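The same search order can be sketched in shell (the `find_dpdk_pc` helper and its optional root argument are illustrative; the actual logic lives in CMake and queries pkg-config rather than scanning directories):

```shell
# Look for libdpdk.pc under the upstream prefix first, then fall back to the
# architecture-aware Mellanox path.
find_dpdk_pc() {
  root="${1:-}"
  arch="$(uname -m)-linux-gnu"
  for dir in "$root/usr/local/lib/$arch/pkgconfig" \
             "$root/opt/mellanox/dpdk/lib/$arch/pkgconfig"; do
    if [ -f "$dir/libdpdk.pc" ]; then
      echo "$dir"
      return 0
    fi
  done
  echo "libdpdk.pc not found" >&2
  return 1
}
```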

@greptile-apps greptile-apps bot commented Nov 26, 2025

Greptile Overview

Greptile Summary

This PR upgrades the Advanced Network operator from using Mellanox's DPDK distribution (mlnx-dpdk) to upstream DPDK 24.11.3, providing better flexibility and reducing proprietary dependencies.

Major Changes:

  • Implemented multi-stage Docker build that compiles upstream DPDK from source using meson/ninja
  • Separated build targets: dpdk (default, no DOCA), gpunetio (includes DOCA SDK), and rivermax (unchanged)
  • Updated CMake configuration to search for upstream DPDK first, then fall back to mlnx-dpdk
  • Changed default manager from both dpdk and gpunetio to dpdk only, reducing unnecessary DOCA dependencies
  • Added explicit CUDA driver library linkage for both DPDK and GPUNetIO managers

Build Architecture:
The new build pipeline follows: base-deps → dpdk-build → dpdk (default) or gpunetio-deps → gpunetio. The Rivermax target branches directly from base-deps since it doesn't require DPDK.

Compatibility:
The changes maintain backward compatibility through the unified CMake detection logic that supports both upstream and Mellanox DPDK distributions.

Confidence Score: 4/5

  • This PR is safe to merge with minor testing recommended for upstream DPDK compatibility
  • The changes are well-structured and thoroughly documented. The multi-stage build approach is clean, and the CMake configuration properly handles both upstream and mlnx-dpdk. The copyright year updates are correct. Score reduced from 5 to 4 because this is a significant infrastructure change (DPDK 22.11 → 24.11.3) that warrants runtime validation with actual hardware to ensure upstream DPDK works correctly with Mellanox ConnectX NICs.
  • operators/advanced_network/Dockerfile requires testing to verify the DPDK build and Mellanox driver compatibility with upstream DPDK 24.11.3

Important Files Changed

File Analysis

Filename Score Overview
operators/advanced_network/Dockerfile 4/5 Restructured multi-stage build to support upstream DPDK 24.11.3, adds new dpdk-build stage with meson/ninja build process, separates GPUNetIO into dedicated target
operators/advanced_network/advanced_network/CMakeLists.txt 5/5 Added CUDAToolkit dependency, updated PKG_CONFIG_PATH for upstream DPDK support, changed default ANO_MGR from 'dpdk gpunetio' to 'dpdk' only
operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt 4/5 Unified DPDK detection logic for both upstream and mlnx-dpdk, added CUDA::cuda_driver linkage, always includes Mellanox NIC support libraries

Sequence Diagram

sequenceDiagram
    participant User
    participant Docker as Docker Build
    participant Base as base-deps
    participant DPDK as dpdk-build
    participant Default as dpdk (default)
    participant GPU as gpunetio
    participant CMake as CMake Config
    
    User->>Docker: docker build (no --target)
    Docker->>Base: Install base dependencies
    Note over Base: meson, ninja, libnuma<br/>libibverbs, libmlx5-1
    Docker->>DPDK: Build upstream DPDK
    Note over DPDK: Download dpdk-24.11.3.tar.xz<br/>meson + ninja build<br/>Install to /usr/local
    DPDK->>DPDK: Configure DOCA repo
    DPDK->>DPDK: Install mlnx-ofed-kernel-utils
    Docker->>Default: Create default target
    Note over Default: Ready with upstream DPDK<br/>No DOCA packages
    
    User->>Docker: docker build --target gpunetio
    Docker->>Base: Install base dependencies
    Docker->>DPDK: Build upstream DPDK
    DPDK->>GPU: Install DOCA SDK packages
    Note over GPU: libdoca-sdk-gpunetio-dev<br/>libdoca-sdk-dma-dev<br/>libdoca-sdk-eth-dev<br/>libdoca-sdk-flow-dev
    Docker->>GPU: Create gpunetio target
    Note over GPU: Ready with DPDK + DOCA
    
    User->>CMake: cmake build
    CMake->>CMake: Search PKG_CONFIG_PATH
    Note over CMake: /usr/local/lib/pkgconfig (upstream)<br/>/opt/mellanox/dpdk (mlnx)
    CMake->>CMake: Find upstream DPDK first
    CMake->>CMake: Link with mlx5, ibverbs
    CMake->>CMake: Build ANO_MGR=dpdk (default)
    Note over CMake: Or ANO_MGR="dpdk gpunetio"

@greptile-apps greptile-apps bot left a comment
5 files reviewed, no comments


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
operators/advanced_network/Dockerfile (1)

218-289: Verified against NVIDIA's official Rivermax documentation via web search. Key findings:

  1. Rivermax does NOT depend on DPDK — Official NVIDIA docs confirm Rivermax dependencies are the Rivermax SDK/driver, DOCA RMAX libraries, and Mellanox OFED. DPDK is used in other DOCA workflows but not for Rivermax. Extending from base-deps is correct.

  2. CMake build flags are correct — The flags RMAX_CUDA=ON and RMAX_TEGRA match official Rivermax SDK documentation.

  3. Critical bug found in .deb filename pattern — According to NVIDIA's official documentation, the .deb packages inside the tarball are named rivermax_<version>.deb (e.g., rivermax_1.70.deb), NOT rivermax_<version>_<arch>.deb. The files are located in architecture-specific directories (deb-dist/aarch64/ or deb-dist/amd64/), not in the filename. The find command on line 260 will fail because it's searching for the wrong filename pattern.


Verify TARGETARCH is used for directory path selection, not .deb filename suffix.

The .deb package inside the Rivermax tarball is named rivermax_<version>.deb (e.g., rivermax_1.70.deb) and located in architecture-specific directories: deb-dist/aarch64/rivermax_1.70.deb or deb-dist/amd64/rivermax_1.70.deb. The current find command searches for rivermax_${RIVERMAX_VERSION}_${TARGETARCH}.deb, which will not match any files. The correct approach is to use TARGETARCH to navigate the directory structure, not to match the filename suffix. Rivermax does not require DPDK or DOCA libraries and extending from base-deps is appropriate.
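The recommended directory-based lookup could look like the following shell sketch (the `rivermax_deb` helper is hypothetical; the deb-dist/aarch64 and deb-dist/amd64 layout and the rivermax_<version>.deb naming follow the documentation cited above, while the version value is only illustrative):

```shell
# Pick the deb-dist subdirectory from TARGETARCH, then match the
# architecture-less filename rivermax_<version>.deb inside it.
rivermax_deb() {  # $1 = extracted tarball root, $2 = TARGETARCH, $3 = version
  case "$2" in
    amd64) dir="amd64" ;;
    arm64) dir="aarch64" ;;
    *) echo "unsupported TARGETARCH: $2" >&2; return 1 ;;
  esac
  find "$1/deb-dist/$dir" -name "rivermax_$3.deb"
}
```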

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2a3e1e6 and e1cd232.

📒 Files selected for processing (5)
  • operators/advanced_network/Dockerfile (4 hunks)
  • operators/advanced_network/README.md (1 hunks)
  • operators/advanced_network/advanced_network/CMakeLists.txt (4 hunks)
  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt (2 hunks)
  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • operators/advanced_network/advanced_network/managers/gpunetio/CMakeLists.txt
  • operators/advanced_network/README.md
  • operators/advanced_network/advanced_network/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (1)
operators/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

CMakeLists.txt must include build system integration using add_holohub_operator() for operators

Files:

  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
🧠 Learnings (2)
📚 Learning: 2025-11-24T16:28:06.281Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.281Z
Learning: Applies to **CMakeLists.txt : Use CMake target dependencies: add DEPENDS EXTENSIONS for operators wrapping GXF extensions, add DEPENDS OPERATORS for applications/workflows

Applied to files:

  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
📚 Learning: 2025-10-22T16:37:54.211Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_server/cpp/CMakeLists.txt:50-52
Timestamp: 2025-10-22T16:37:54.211Z
Learning: When an application needs to bundle a specific version of a library (e.g., OpenSSL 3.x) that differs from the system version (e.g., OpenSSL 1.1.x), linking directly to absolute .so file paths combined with RPATH configuration is the correct approach. Do not suggest using find_package() in these cases, as it would find the system version instead of the bundled version.

Applied to files:

  • operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt
🔇 Additional comments (8)
operators/advanced_network/advanced_network/managers/dpdk/CMakeLists.txt (3)

56-58: CUDA target properties configuration looks good.

The CUDA separable compilation, symbol resolution, and architecture specifications (sm_80, sm_90) are appropriate for modern NVIDIA GPUs and align well with the PR's CUDA integration enhancements.


61-61: Clarify potential duplication of DPDK compiler flags.

Line 61 applies DPDK_CFLAGS (which typically includes all flags from pkg-config), while line 41 already applies DPDK_CFLAGS_OTHER and line 40 applies DPDK_INCLUDE_DIRS. This may cause flag duplication:

  • DPDK_CFLAGS_OTHER (line 41): Compiler flags excluding includes
  • DPDK_CFLAGS (line 61): All flags including includes
  • DPDK_INCLUDE_DIRS (line 40): Already applied via target_include_directories()

Verify whether DPDK_CFLAGS on line 61 adds necessary flags beyond what's already applied, or if it should be removed to avoid redundancy and potential flag conflicts.


15-21: Repository access issues prevented automated verification of whether add_holohub_operator() is present in this file or its parent CMakeLists.txt files.

Based on the learnings provided and the coding guidelines in scope: the guideline states that operators/**/CMakeLists.txt files must include build system integration using add_holohub_operator(), and the provided code snippet (lines 15-21) does not show this call. The call may still exist in:

  • The current file (below line 21)
  • Parent CMakeLists.txt files in the directory hierarchy

Since automated verification could not be completed, this is marked as requiring manual verification:


Verify add_holohub_operator() build system integration per coding guidelines.

The coding guidelines specify that operator CMakeLists.txt files must include build system integration using add_holohub_operator(). This call is not visible in the provided code segment (lines 15-21).

Please verify that add_holohub_operator() is being called for the ${PROJECT_NAME} target, either in this file (lines 22+) or in a parent CMakeLists.txt file in the operators/advanced_network/advanced_network/managers/ or operators/advanced_network/advanced_network/ or operators/advanced_network/ directories.

operators/advanced_network/Dockerfile (5)

78-114: Approve the restructured base-deps stage with DPDK-focused dependencies.

The removal of DOCA/GPUNetIO packages from the default path and addition of DPDK build dependencies (meson, python3-pyelftools, libnuma-dev, libibverbs-dev, librdmacm-dev, libmlx5-1, ibverbs-utils) is well-structured. The clear comments justify each dependency addition.


182-189: Approve the dpdk stage as the new default target.

The dpdk stage correctly extends dpdk-build and serves as the clean, minimal default target. DPDK is ready to use after the dpdk-build stage's ldconfig and artifact cleanup. This removes the DOCA/GPUNetIO dependency from the default path, which aligns with the PR objective.


190-217: Approve the gpunetio-deps and gpunetio stages for optional DOCA support.

The two-stage approach (gpunetio-deps installing DOCA packages, then gpunetio extending it) is clean and defers DOCA installation only when explicitly requested via --target gpunetio. The DOCA repository is already configured in dpdk-build, and the DOCA SDK packages are properly documented. This maintains backward compatibility for users who need GPUNetIO.


291-297: Approve the default stage change to dpdk.

Explicitly setting the default stage to dpdk is correct and aligns with the PR goal of supporting upstream DPDK as the default path, with DOCA/GPUNetIO as an optional target. The comments clearly document the available build targets.


125-180: All configuration elements were verified and are correct for deployment.

  1. DPDK 24.11.3: Confirmed as a valid LTS release available from fast.dpdk.org (announced August 18, 2025).

  2. Architecture mapping: Verified correct—DOCA 2.8.0 packages use x86_64 and arm64-sbsa architectures, matching your amd64 → x86_64 and arm64 → arm64-sbsa mappings exactly.

  3. Disabled drivers: Mellanox drivers (net/mlx4, net/mlx5) are not in the disabled list, so they remain enabled for ConnectX support. The disabled list removes only optional/specialty drivers while preserving core networking functionality.

The build configuration is sound for production use.
