
Conversation


@AndreasHeumann commented Nov 7, 2025

This example demonstrates real-time visualization of data from Holoscan applications using NATS messaging and web-based dashboards. It showcases how to stream tensor data from a Holoscan pipeline and visualize it dynamically in a browser.
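
For orientation, here is a minimal sketch of the consumer side of this flow using the nats-py client. It is not the visualizer's actual code: the server URL, the subject name, and the decode step are assumptions for illustration only.

```python
import asyncio

import nats  # nats-py client


async def main():
    # Assumed local NATS server and subject name; the example's real values may differ.
    nc = await nats.connect("nats://localhost:4222")

    async def on_message(msg):
        # In the real application the payload is a FlatBuffers-encoded tensor;
        # here we only show the raw bytes arriving on the subject.
        print(f"{msg.subject}: received {len(msg.data)} bytes")

    await nc.subscribe("pipeline_visualization.data", cb=on_message)
    await asyncio.sleep(10)  # listen for a while, then shut down
    await nc.close()


if __name__ == "__main__":
    asyncio.run(main())
```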

Summary by CodeRabbit

  • New Features

    • Pipeline Visualization app (C++ & Python): time-series demo, optional NATS streaming, FlatBuffers messaging, Python bindings, and real-time web visualizers (static & dynamic) with an asynchronous NATS data logger and CLI entry points.
  • Documentation

    • Added user README, public metadata descriptors, example YAML configs, and API docs for the NATS logger and visualizers.
  • Chores

    • Build/integration updates, FlatBuffers schemas, Python requirements, helper scripts, VS Code run/tasks, and a Dockerfile for packaging.


@AndreasHeumann self-assigned this Nov 7, 2025
@AndreasHeumann force-pushed the ah/pipeline_vis branch 2 times, most recently from f870b18 to bbc6ddf on November 11, 2025 at 14:36
@AndreasHeumann marked this pull request as ready for review on November 11, 2025 at 14:37

greptile-apps bot commented Nov 11, 2025

Greptile Overview

Greptile Summary

This PR introduces a comprehensive pipeline visualization example demonstrating real-time data streaming from Holoscan applications via NATS messaging with web-based visualization.

Major Changes:

  • Implemented C++ and Python Holoscan applications generating time-series sine wave data with modulation
  • Created custom NatsLogger data logger using FlatBuffers serialization for efficient messaging
  • Built two web visualizers (static and dynamic) using Dash/Plotly for real-time data plots (see the sketch after this list)
  • Added complete documentation, Docker setup, and helper scripts for easy deployment
  • Integrated with existing Holoscan AsyncDataLogger infrastructure
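
To make the visualizer bullet above concrete, here is a minimal Dash/Plotly sketch of a polling, auto-updating graph. It is illustrative only: the 200 ms polling interval and the 3000-sample trace mirror the sequence diagram below, and the synthetic sine stands in for tensors received over NATS.

```python
import numpy as np
import plotly.graph_objects as go
from dash import Dash, Input, Output, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="live-graph"),
    # Poll for new data every 200 ms, as in the sequence diagram below.
    dcc.Interval(id="tick", interval=200),
])


@app.callback(Output("live-graph", "figure"), Input("tick", "n_intervals"))
def update(n_intervals):
    # Stand-in for the latest tensor pulled from the NATS subscription.
    t = np.linspace(0, 2 * np.pi, 3000)
    y = np.sin(t + 0.1 * (n_intervals or 0))
    return go.Figure(go.Scatter(y=y))


if __name__ == "__main__":
    app.run(debug=True)
```

The actual dynamic visualizer presumably swaps the synthetic sine for data dequeued from its NATS subscription inside the callback.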

Code Quality:
The implementation demonstrates excellent software engineering practices with proper RAII patterns in C++, clean async handling in Python, comprehensive error handling, and thorough documentation. The C++ CUDA memory handling correctly uses async copies with pageable memory (no sync needed as discussed in previous threads). The Python visualizer's threading model is clean and the NATS client properly manages subscriptions.

Confidence Score: 5/5

  • This PR is safe to merge with no blocking issues
  • The code is production-ready with excellent architecture, comprehensive documentation, proper error handling, and no logic or security issues. All previously flagged concerns from earlier review threads have been addressed or clarified by the developer.
  • No files require special attention

Important Files Changed

File Analysis

Filename Score Overview
applications/pipeline_visualization/cpp/pipeline_visualization.cpp 5/5 C++ Holoscan application generating sine waves with modulation; clean implementation with proper error handling and NATS integration
applications/pipeline_visualization/cpp/nats_logger.cpp 5/5 NATS logger implementation with FlatBuffers serialization; proper RAII patterns and async backend handling
applications/pipeline_visualization/python/pipeline_visualization.py 5/5 Python Holoscan application mirroring C++ functionality; proper tensor handling and NATS logger integration
applications/pipeline_visualization/visualizer/nats_async.py 5/5 Async NATS client for Python visualizer; clean threading model with proper error handling
applications/pipeline_visualization/visualizer/tensor_to_numpy.py 5/5 FlatBuffers to NumPy conversion utility; proper handling of DLPack tensor format with shape/stride truncation
applications/pipeline_visualization/README.md 5/5 Comprehensive documentation with quickstart, architecture diagrams, and troubleshooting; well-structured user guide

Sequence Diagram

sequenceDiagram
    participant User
    participant SourceOp
    participant ModulateOp
    participant SinkOp
    participant NatsLogger
    participant NATS
    participant Visualizer
    
    User->>SourceOp: Start Application
    
    loop Every 50ms (20Hz)
        SourceOp->>SourceOp: Generate sine wave (3000 samples)
        SourceOp->>NatsLogger: Log output data
        SourceOp->>ModulateOp: Send tensor via "out" port
        
        ModulateOp->>NatsLogger: Log input data
        ModulateOp->>ModulateOp: Add 300Hz modulation
        ModulateOp->>NatsLogger: Log output data
        ModulateOp->>SinkOp: Send modulated tensor via "out" port
        
        SinkOp->>NatsLogger: Log input data
        SinkOp->>SinkOp: Receive and process
    end
    
    par Async Publishing (Rate Limited to 5Hz)
        NatsLogger->>NatsLogger: Serialize tensor to FlatBuffers
        NatsLogger->>NATS: Publish message to "subject_prefix.data"
    end
    
    par Web Visualizer (Polling every 200ms)
        Visualizer->>NATS: Subscribe to "subject_prefix.data"
        NATS->>Visualizer: Stream messages
        Visualizer->>Visualizer: Deserialize FlatBuffers
        Visualizer->>Visualizer: Convert to NumPy array
        Visualizer->>User: Update web dashboard graphs
    end
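
The 5 Hz throttle in the diagram is applied per stream ID. A small Python sketch of that pattern follows; the logger itself is implemented in C++, so this is purely illustrative.

```python
import time

# Illustrative per-ID throttle mirroring the diagram: publish at most `rate_hz`
# times per second for each unique stream ID.
_last_publish: dict[str, float] = {}


def should_publish(unique_id: str, rate_hz: float = 5.0) -> bool:
    now = time.monotonic()
    last = _last_publish.get(unique_id, 0.0)
    if rate_hz <= 0 or now - last >= 1.0 / rate_hz:
        _last_publish[unique_id] = now
        return True
    return False
```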


greptile-apps bot left a comment


28 files reviewed, 1 comment



coderabbitai bot commented Nov 11, 2025

Warning

Rate limit exceeded

@bhashemian has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 4 minutes and 23 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 99a0884 and b9140bb.

📒 Files selected for processing (3)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new pipeline_visualization application (C++ and Python): FlatBuffers schemas, C++ and Python time-series apps, a NATS-backed AsyncDataLogger with pybind11 bindings, Dash visualizers and utilities, build/CMake integration, IDE tasks, Dockerfile, scripts, metadata, and documentation.

Changes

Cohort / File(s) Summary
IDE & Tasks
.vscode/launch.json, .vscode/tasks.json
Add two pipeline_visualization launch configurations (cpp, python) and two "Build pipeline_visualization" shell tasks.
Application registration
applications/CMakeLists.txt
Register pipeline_visualization via add_holohub_application(pipeline_visualization).
FlatBuffers & schemas
applications/pipeline_visualization/CMakeLists.txt, applications/pipeline_visualization/schemas/*
Fetch FlatBuffers, list tensor.fbs/message.fbs, generate code, create interface target pipeline_visualization_flatbuffers_target (alias), expose generated includes; always add cpp subdir and conditionally add python under HOLOHUB_BUILD_PYTHON.
C++ build targets & config
applications/pipeline_visualization/cpp/CMakeLists.txt, applications/pipeline_visualization/cpp/pipeline_visualization.yaml
Add nats_logger shared lib, pipeline_visualization executable, custom target to copy YAML, and link dependencies between targets.
C++ app & tensor helper
applications/pipeline_visualization/cpp/pipeline_visualization.cpp, applications/pipeline_visualization/cpp/create_tensor.hpp, applications/pipeline_visualization/cpp/create_tensor.cpp
Add TimeSeries C++ app (SourceOp/ModulateOp/SinkOp/TimeSeriesApp/main) and CreateTensor that serializes holoscan::Tensor → FlatBuffers Tensor with DLPack/CUDA handling and optional stream-based copies.
C++ NATS logger
applications/pipeline_visualization/cpp/nats_logger.hpp, applications/pipeline_visualization/cpp/nats_logger.cpp
Implement holoscan::data_loggers::NatsLogger (PIMPL) with nested AsyncNatsBackend, per-ID rate limiting, GXF-type extraction, and asynchronous NATS publish logic (new public API).
C++ metadata
applications/pipeline_visualization/cpp/metadata.json
Add C++ application metadata descriptor.
Python bindings & metadata
applications/pipeline_visualization/python/CMakeLists.txt, applications/pipeline_visualization/python/nats_logger_pybind.cpp, applications/pipeline_visualization/python/metadata.json
Add pybind11 _nats_logger module exposing NatsLogger via PyNatsLogger, CMake target wiring, and Python app metadata.
Python app & config
applications/pipeline_visualization/python/pipeline_visualization.py, applications/pipeline_visualization/python/pipeline_visualization.yaml
Add Python TimeSeries app mirroring the C++ app, optional NatsLogger integration, CLI entry point, and YAML config.
Python docs & pydoc
applications/pipeline_visualization/python/pydoc.hpp
Add PYDOC documentation entry describing NatsLogger parameters and behavior.
Python packaging & deps
applications/pipeline_visualization/requirements.txt, applications/pipeline_visualization/python/CMakeLists.txt
Add runtime Python dependency constraints and pybind11 module target linking to native nats_logger.
Visualizer modules & utilities
applications/pipeline_visualization/visualizer/*
Add Dash visualizers and utilities: graph_components.py, tensor_to_numpy.py, nats_async.py, styles.py, visualizer_dynamic.py, visualizer_static.py, plus helper scripts start_visualizer.sh, start_nats_server.sh.
Docker & packaging
applications/pipeline_visualization/Dockerfile
Add multi-stage Dockerfile stages to build/install the pipeline_visualization app and Python deps.
Docs & README
applications/pipeline_visualization/README.md
Add README describing architecture, components, usage, schemas, configuration, and troubleshooting.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Areas needing extra attention:
    • applications/pipeline_visualization/cpp/nats_logger.{hpp,cpp} — concurrency, async backend, NATS API usage, ownership, and error handling.
    • applications/pipeline_visualization/cpp/create_tensor.{hpp,cpp} — DLPack handling, CUDA host-copy semantics and optional stream use.
    • applications/pipeline_visualization/python/nats_logger_pybind.cpp — pybind11 argument mapping, ownership and lifetime across Python/C++.
    • applications/pipeline_visualization/visualizer/tensor_to_numpy.py and Dash modules — FlatBuffers parsing, dtype/stride reconstruction, and UI patch/update logic.
    • applications/pipeline_visualization/CMakeLists.txt — FlatBuffers generation, target exposure, and conditional python subdir wiring.

Possibly related PRs

Pre-merge checks

✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'Add pipeline visualization example' directly summarizes the main change: introducing a new pipeline visualization example application with NATS messaging and web-based dashboards.


coderabbitai bot left a comment


Actionable comments posted: 16

🧹 Nitpick comments (9)
applications/pipeline_visualization/README.md (2)

5-5: Add alt text to the image for accessibility.

The image reference lacks alt text, which is important for accessibility and when the image fails to load.

Apply this fix:

-![](pipeline_visualization.png)
+![Pipeline Visualization Architecture Diagram](pipeline_visualization.png)

140-149: Add language identifier to code fence.

The code block should specify bash or text as the language for proper syntax highlighting.

Apply this fix:

-```
+```text
 Usage: ./pipeline_visualization [options]
applications/pipeline_visualization/visualizer/nats_async.py (3)

74-74: Use explicit Optional type hints.

PEP 484 recommends using T | None instead of implicit Optional for clarity.

Apply these fixes:

-    def __init__(self, host: str, subscriptions: list[str] = None):
+    def __init__(self, host: str, subscriptions: list[str] | None = None):
-    async def _subscribe(self, subject: str, q: Queue = None):
+    async def _subscribe(self, subject: str, q: Queue | None = None):

Also applies to: 149-149


112-112: Use logging.exception for error logging in except blocks.

When logging errors within exception handlers, use logging.exception() instead of logging.error() to automatically include the stack trace.

Apply these fixes:

         except ConnectionRefusedError as e:
-            logger.error(f"Cannot connect to NATS: {e}")
+            logger.exception(f"Cannot connect to NATS: {e}")
         except TypeError:
-            logger.error(f"Cannot subscribe to {subject} since stream doesn't exist")
+            logger.exception(f"Cannot subscribe to {subject} since stream doesn't exist")
         except nats.errors.ConnectionClosedError:
-            logger.error("Failed to send message since NATS connect was closed")
+            logger.exception("Failed to send message since NATS connect was closed")

Also applies to: 163-163, 181-181


16-16: Remove unused noqa directive.

The noqa: E501 directive is not needed as the E501 check (line too long) is not enabled.

Apply this fix:

 limitations under the License.
-"""  # noqa: E501
+"""
applications/pipeline_visualization/visualizer/start_visualizer.sh (1)

21-21: Consider separating declaration and assignment for error handling.

Combining the export with command substitution can mask failures from the $(pwd) command. While unlikely to fail here, separating them improves robustness.

Apply this fix:

-export PYTHONPATH=${PYTHONPATH:-""}:$(pwd)/../../../build/applications/pipeline_visualization/flatbuffers/
+FLATBUFFERS_PATH=$(pwd)/../../../build/applications/pipeline_visualization/flatbuffers/
+export PYTHONPATH=${PYTHONPATH:-""}:$FLATBUFFERS_PATH
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

16-16: Remove unused noqa directive.

The noqa: E501 directive is unnecessary as there's no long line to suppress.

Apply this diff:

-"""  # noqa: E501
+"""
applications/pipeline_visualization/schemas/tensor.fbs (1)

20-24: Consider documenting limited type coverage.

The DLDataTypeCode enum covers the common types (int, uint, float) but omits others like kDLComplex (5) and kDLBfloat (4) that exist in the DLPack specification. For a visualization use case, this is likely sufficient, but consider adding a comment noting the intentional limitation.

applications/pipeline_visualization/visualizer/graph_components.py (1)

17-17: Drop the unused noqa directive.

Ruff flags this # noqa: E501, so we can remove it and keep lint clean.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9c10dac and bbc6ddf.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (30)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/metadata.json: In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category
The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Files:

  • applications/pipeline_visualization/cpp/metadata.json
  • applications/pipeline_visualization/python/metadata.json
applications/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Files:

  • applications/CMakeLists.txt
🧠 Learnings (20)
📓 Common learnings
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Each application must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to applications/CMakeLists.txt : Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/workflows/*/CMakeLists.txt : Each workflow must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/pkg/*/CMakeLists.txt : Each package must provide a CMakeLists.txt for package configuration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to pkg/CMakeLists.txt : Register packages in pkg/CMakeLists.txt using add_holohub_package(...)

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/gxf_extensions/*/CMakeLists.txt : Each GXF extension must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to workflow/CMakeLists.txt : Register workflows in workflow/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/metadata.json : Each application must include a metadata.json that follows the application schema

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-30T19:04:41.239Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1203
File: applications/video_streaming/video_streaming_server/cpp/metadata.json:42-51
Timestamp: 2025-10-30T19:04:41.239Z
Learning: In the video_streaming application structure, umbrella metadata (applications/video_streaming/metadata.json) uses language-qualified mode names like "server_cpp" and "server_python" to distinguish between implementations, while component-specific metadata files (e.g., applications/video_streaming/video_streaming_server/cpp/metadata.json) use simple mode names like "server" since they are already scoped to a specific language implementation by their directory structure.

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/README.md : Include performance insights, considerations, and benchmarking results in application README when relevant

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/README.md : Each application must include a README.md describing purpose and architecture

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/README.md
  • applications/pipeline_visualization/python/pipeline_visualization.py
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to operators/CMakeLists.txt : Register operators in operators/CMakeLists.txt using add_holohub_operator(...) and optional DEPENDS EXTENSIONS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Applications should include a testing section in CMakeLists.txt using CTest for functional tests

Applied to files:

  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to gxf_extensions/CMakeLists.txt : Register extensions in gxf_extensions/CMakeLists.txt using add_holohub_extension(...)

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/operators/*/*.{py,cpp} : Main operator filename should match the directory name with the appropriate extension

Applied to files:

  • applications/CMakeLists.txt
🧬 Code graph analysis (12)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (2)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
operators/medical_imaging/core/io_type.py (1)
  • IOType (19-25)
applications/pipeline_visualization/start_nats_server.sh (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (217-231)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
applications/pipeline_visualization/python/pipeline_visualization.py (4)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (13)
  • spec (64-64)
  • spec (64-64)
  • spec (135-138)
  • spec (135-135)
  • spec (198-198)
  • spec (198-198)
  • op_input (140-181)
  • op_input (140-141)
  • op_input (200-206)
  • op_input (200-201)
  • TimeSeriesApp (230-235)
  • main (278-371)
  • main (278-278)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)
  • setup (281-296)
  • setup (281-281)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (217-231)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (304-312)
  • initialize (304-304)
  • initialize (481-483)
  • initialize (481-481)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/visualizer/visualizer_static.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-181)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (4)
  • Visualizer (34-244)
  • update_subject (102-124)
  • update_source (134-228)
  • run (230-244)
applications/pipeline_visualization/cpp/nats_logger.cpp (3)
applications/pipeline_visualization/cpp/nats_logger.hpp (9)
  • data (72-76)
  • data (123-127)
  • spec (54-54)
  • tensor (89-93)
  • tensor_map (106-110)
  • AsyncNatsBackend (145-145)
  • AsyncNatsBackend (146-146)
  • entry (168-168)
  • entry (178-178)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
  • CreateTensor (25-111)
  • CreateTensor (25-27)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (55-57)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (3)
  • tensor (89-93)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (55-57)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-181)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/nats_async.py (3)
applications/ehr_query_llm/lmm/riva_asr.py (1)
  • put (253-254)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (217-231)
🪛 Biome (2.1.2)
.vscode/launch.json

[error] 1286-1286: Expected a property but instead found '}'.

Expected a property here.

(parse)


[error] 1300-1300: Expected a property but instead found '}'.

Expected a property here.

(parse)

🪛 Clang (14.0.6)
applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'flatbuffers/tensor_generated.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

🪛 markdownlint-cli2 (0.18.1)
applications/pipeline_visualization/README.md

5-5: Images should have alternate text (alt text)

(MD045, no-alt-text)


128-128: Bare URL used

(MD034, no-bare-urls)


140-140: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


158-158: Bare URL used

(MD034, no-bare-urls)


195-195: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py

16-16: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


56-56: Avoid specifying long messages outside the exception class

(TRY003)

applications/pipeline_visualization/visualizer/styles.py

1-1: Shebang is present but file is not executable

(EXE001)


17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)

applications/pipeline_visualization/visualizer/graph_components.py

1-1: Shebang is present but file is not executable

(EXE001)


17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)

applications/pipeline_visualization/python/pipeline_visualization.py

54-54: Unused method argument: op_input

(ARG002)


54-54: Unused method argument: context

(ARG002)


89-89: Unused method argument: context

(ARG002)


125-125: Unused method argument: op_output

(ARG002)


125-125: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/visualizer_static.py

17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


183-183: Avoid specifying long messages outside the exception class

(TRY003)


228-228: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/visualizer_dynamic.py

17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


178-178: Avoid specifying long messages outside the exception class

(TRY003)


241-241: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/nats_async.py

16-16: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


74-74: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


112-112: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


149-149: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


163-163: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


181-181: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🪛 Shellcheck (0.11.0)
applications/pipeline_visualization/start_nats_server.sh

[error] 34-34: Double quote array expansions to avoid re-splitting elements.

(SC2068)


[warning] 36-36: Use single quotes, otherwise this expands now rather than when signalled.

(SC2064)

applications/pipeline_visualization/visualizer/start_visualizer.sh

[warning] 21-21: Declare and assign separately to avoid masking return values.

(SC2155)

🔇 Additional comments (13)
.vscode/tasks.json (1)

518-537: Build task implementation looks solid.

The "Build pipeline_visualization" task follows established conventions and patterns from other build tasks in the file. The inclusion of --language python is appropriate given the dual C++/Python debug configurations defined in launch.json. Task structure, group assignment, and integration are all correct.

applications/pipeline_visualization/cpp/metadata.json (1)

1-38: LGTM!

The metadata structure is well-formed and follows the required schema. The category tag "Visualization" is correctly placed as the first element in the tags array and matches an approved category.

applications/pipeline_visualization/python/pydoc.hpp (1)

1-124: LGTM!

The pydoc header is well-structured with comprehensive documentation for the NatsLogger class. The clang diagnostic about the missing macros.hpp file is expected since this header is part of the Holoscan SDK build infrastructure and will be available during the actual build process.

applications/pipeline_visualization/python/pipeline_visualization.yaml (1)

1-26: LGTM!

The YAML configuration is well-structured with clear, self-documenting parameter names and helpful commented examples for pattern-based filtering.

applications/pipeline_visualization/requirements.txt (1)

19-19: Disregard this review comment.

The review comment is based on incorrect information. Pandas version 2.3.3 does exist—it's listed among the available versions. The version constraint pandas>=2.3.3,<3.0 is valid and will install successfully without any issues. The suggested fix is unnecessary.

Likely an incorrect or invalid review comment.

applications/CMakeLists.txt (1)

95-96: LGTM!

The application registration follows the correct pattern and is properly placed in alphabetical order.

applications/pipeline_visualization/python/metadata.json (1)

1-38: LGTM!

The metadata follows the required schema, and the first tag "Visualization" is from the approved categories list as required by the coding guidelines.

applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1)

17-25: LGTM!

The NATS logger configuration is well-structured with sensible defaults for a visualization example. Enabling all logging flags is appropriate for demonstrating the full capabilities of the pipeline visualization feature.

applications/pipeline_visualization/python/CMakeLists.txt (1)

16-21: LGTM!

The pybind11 module setup correctly uses the pybind11_add_holohub_module macro as recommended.

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

25-69: LGTM!

The tensor conversion logic correctly handles DLDataType to NumPy dtype mapping, including proper endianness and vector type support.
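
For readers unfamiliar with that mapping, here is a small sketch of the idea. The helper name and type coverage are hypothetical and not the file's actual implementation.

```python
import numpy as np

# DLPack type codes: 0 = kDLInt, 1 = kDLUInt, 2 = kDLFloat.
_DL_CODE_TO_KIND = {0: "i", 1: "u", 2: "f"}


def dldatatype_to_dtype(code: int, bits: int, lanes: int = 1) -> np.dtype:
    """Build a little-endian NumPy dtype from a DLPack (code, bits, lanes) triple."""
    base = np.dtype(f"<{_DL_CODE_TO_KIND[code]}{bits // 8}")
    # lanes > 1 maps to a fixed-size sub-array dtype (vector types).
    return base if lanes == 1 else np.dtype((base, lanes))


assert dldatatype_to_dtype(2, 32) == np.float32
```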

applications/pipeline_visualization/schemas/tensor.fbs (1)

57-64: LGTM!

The Tensor table structure correctly follows the DLPack specification with all necessary fields for tensor representation.

applications/pipeline_visualization/python/nats_logger_pybind.cpp (1)

45-95: LGTM!

The PyNatsLogger constructor design using std::variant to handle both enum and string forms of queue_policy is well-architected, enabling seamless YAML configuration deserialization.

applications/pipeline_visualization/schemas/message.fbs (1)

1-37: LGTM!

The Message schema is well-structured with clear separation between input/output types, proper timestamping, and an extensible Payload union design.


greptile-apps bot left a comment


28 files reviewed, no comments



coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (16)
.vscode/launch.json (1)

1284-1287: Remove trailing commas in presentation objects to fix JSON parse errors.

Both pipeline_visualization debug configurations have trailing commas in their presentation objects, causing JSON parse failures.

Apply this diff to fix both configurations:

         {
             "name": "(gdb) pipeline_visualization/cpp",
             "type": "cppdbg",
             "request": "launch",
             "preLaunchTask": "Build pipeline_visualization",
             "program": "${workspaceFolder}/build/pipeline_visualization/applications/pipeline_visualization/cpp/pipeline_visualization",
             "args": [ ],
             "stopAtEntry": false,
             "cwd": "${workspaceFolder}/applications/pipeline_visualization/",
             "externalConsole": false,
             "MIMode": "gdb",
             "setupCommands": [
                 {
                     "description": "Enable pretty-printing for gdb",
                     "text": "-enable-pretty-printing",
                     "ignoreFailures": true
                 }
             ],
             "presentation": {
-                "group": "pipeline_visualization",
+                "group": "pipeline_visualization"
             }
         },
         {
             "name": "(debugpy) pipeline_visualization/python",
             "type": "debugpy",
             "request": "launch",
             "preLaunchTask": "Build pipeline_visualization",
             "program": "${workspaceFolder}/applications/pipeline_visualization/python/pipeline_visualization.py",
             "cwd": "${workspaceFolder}/build/pipeline_visualization/",
             "env": {
                 "PYTHONPATH": "${workspaceFolder}/build/pipeline_visualization/python/lib:${workspaceFolder}:${env:PYTHONPATH}"
             },
             "presentation": {
-                "group": "pipeline_visualization",
+                "group": "pipeline_visualization"
             }
         }

Also applies to: 1298-1301

applications/pipeline_visualization/start_nats_server.sh (2)

34-34: Quote the array expansion to prevent word splitting.

The $@ parameter expansion should be quoted to preserve arguments that contain spaces. This was flagged in a previous review and remains unaddressed.

Apply this fix:

-docker run --network host -ti -v $tmp_dir/nats-server.conf:/nats-server.conf nats:latest $@
+docker run --network host -ti -v $tmp_dir/nats-server.conf:/nats-server.conf nats:latest "$@"

36-36: Use single quotes in trap to defer variable expansion.

The trap command with double quotes causes $tmp_dir to expand immediately rather than when the EXIT signal is received. Use single quotes to defer expansion until the trap executes. This was flagged in a previous review and remains unaddressed.

Apply this fix:

-trap "rm -rf $tmp_dir" EXIT
+trap 'rm -rf "$tmp_dir"' EXIT
applications/pipeline_visualization/visualizer/styles.py (1)

1-17: Fix lint failures at the module header.

The shebang on line 1 is unnecessary for an imported module, and the # noqa: E501 directive on line 17 is unused.

applications/pipeline_visualization/cpp/create_tensor.hpp (1)

21-21: Fix the FlatBuffers include path.

The include path doesn't match the generated header location and will cause compilation errors.

applications/pipeline_visualization/requirements.txt (1)

4-4: Update numpy version constraint.

The current constraint is incompatible with pandas 2.3.3, which requires numpy >= 1.20.3.
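
(Illustrative only.) One constraint consistent with the note above would be numpy>=1.20.3; the exact floor and any upper bound remain the author's call.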

applications/pipeline_visualization/python/CMakeLists.txt (1)

23-25: Remove redundant explicit linking.

The pybind11_add_holohub_module macro already handles linking internally. Based on learnings.

applications/pipeline_visualization/README.md (1)

124-124: Correct the path in the command.

The path should reference applications/pipeline_visualization/visualizer not examples/pipeline_visualization/python.

applications/pipeline_visualization/python/nats_logger_pybind.cpp (1)

147-147: Set the binding default to 200000 ns.

The Python binding still injects 50000, which overrides the C++ default of 200000 whenever users rely on the defaults. This was flagged earlier and remains unresolved—please align the binding with the constructor.

-           "large_data_worker_sleep_time"_a = 50000,
+           "large_data_worker_sleep_time"_a = 200000,
applications/pipeline_visualization/visualizer/visualizer_static.py (2)

17-17: Drop the unused noqa directive.

Ruff flags this as RUF100; there’s nothing left to suppress, so the directive now breaks linting. Please remove it.

-"""  # noqa: E501
+"""

189-194: Guard against unknown stream IDs.

list.index() throws ValueError as soon as an unexpected unique_id appears, so the callback crashes before it can log and skip. Please catch the lookup failure instead of letting it bubble up.

-                index = self._unique_ids.index(unique_id)
-                if index is None:
-                    logger.warning(f"Unknown unique_id: {unique_id}")
-                    continue
+                try:
+                    index = self._unique_ids.index(unique_id)
+                except ValueError:
+                    logger.warning(f"Unknown unique_id: {unique_id}")
+                    continue
applications/pipeline_visualization/visualizer/nats_async.py (1)

149-166: Fix uninitialized variable usage after exception.

If the subscribe operation raises a TypeError at lines 154-161, the code continues to line 165 and attempts to use nats_subscription, which was never initialized. This will cause a NameError.

Apply this fix:

     async def _subscribe(self, subject: str, q: Queue | None = None):
         if q is None:
             q = Queue()
 
+        nats_subscription = None
         try:
             if self._use_js:
                 nats_subscription = await self._js_context.subscribe(
                     subject, cb=self._async_sub_handler
                 )
             else:
                 nats_subscription = await self._connection.subscribe(
                     subject, cb=self._async_sub_handler
                 )
         except TypeError:
             logger.error(f"Cannot subscribe to {subject} since stream doesn't exist")
+            return
 
         self._subscriptions[subject] = _Subscription(nats_subscription=nats_subscription, queue=q)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)

165-166: Handle optional reply subject safely.

natsMsg_GetReply returns nullptr when the publisher omits a reply subject. Constructing std::string reply = nullptr is undefined behavior and will crash.

Apply this diff:

-    std::string reply = natsMsg_GetReply(msg);
+    const char* reply_cstr = natsMsg_GetReply(msg);
+    std::string reply = reply_cstr ? reply_cstr : "";

253-258: Avoid rate-limit division by zero.

If publish_rate is configured as 0 Hz (or a negative value), 1'000'000'000 / publish_rate yields inf/NaN, so subsequent messages are never logged. Treat non-positive rates as "no throttling" (or reject them) before computing the interval.

One possible fix:

-    if (timestamp_ns - last_publish_time_[unique_id] >= 1000000000 / logger->publish_rate_.get()) {
+    const float rate_hz = logger->publish_rate_.get();
+    if (rate_hz <= 0.0F) {
+      last_publish_time_[unique_id] = timestamp_ns;
+      return true;
+    }
+    const int64_t period_ns = static_cast<int64_t>(1'000'000'000.0F / rate_hz);
+    if (timestamp_ns - last_publish_time_[unique_id] >= period_ns) {

314-325: Preserve CUDA stream information when delegating.

The overrides drop the stream argument when calling the base AsyncDataLoggerResource helpers, so GPU buffers will be synchronized on the default stream instead of the producer's stream.

Forward stream in all three overloads:

   return AsyncDataLoggerResource::log_data(
-      data, unique_id, acquisition_timestamp, metadata, io_type);
+      data, unique_id, acquisition_timestamp, metadata, io_type, stream);
   return AsyncDataLoggerResource::log_tensor_data(
-      tensor, unique_id, acquisition_timestamp, metadata, io_type);
+      tensor, unique_id, acquisition_timestamp, metadata, io_type, stream);
   return AsyncDataLoggerResource::log_tensormap_data(
-      tensor_map, unique_id, acquisition_timestamp, metadata, io_type);
+      tensor_map, unique_id, acquisition_timestamp, metadata, io_type, stream);

Also applies to: 327-339, 341-353


520-541: Propagate publish failures for TensorMap entries.

When the NATS connection is unavailable, publish_data returns false, yet this branch still reports success. Propagate the failure so callers can react consistently (the TensorData branch already does this at lines 516-518).

Proposed change:

-      nats_logger_->impl_->publish_data(nats_logger_->subject_prefix_.get() + ".data",
-                                        builder.GetBufferPointer(),
-                                        builder.GetSize());
+      const bool ok = nats_logger_->impl_->publish_data(
+          nats_logger_->subject_prefix_.get() + ".data",
+          builder.GetBufferPointer(),
+          builder.GetSize());
+      if (!ok) {
+        return false;
+      }
     }
     return true;
🧹 Nitpick comments (6)
applications/pipeline_visualization/README.md (1)

5-5: Consider addressing markdown lint issues.

Several minor markdown style issues could be cleaned up:

  • Line 5: Add alt text to the image
  • Lines 128, 158: Wrap bare URLs in angle brackets or markdown links
  • Lines 140, 195: Add language identifiers to fenced code blocks

These are cosmetic improvements for better documentation quality.

Also applies to: 128-128, 140-140, 158-158, 195-195

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

16-16: Remove unused noqa directive.

The # noqa: E501 directive is unused and can be removed.

Apply this diff:

-"""  # noqa: E501
+"""
applications/pipeline_visualization/visualizer/nats_async.py (1)

74-74: Use explicit union syntax for optional parameters.

PEP 484 prohibits implicit Optional. The parameter should explicitly indicate it can be None.

Apply this diff:

-    def __init__(self, host: str, subscriptions: list[str] = None):
+    def __init__(self, host: str, subscriptions: list[str] | None = None):
applications/pipeline_visualization/python/pipeline_visualization.py (2)

92-92: Consider consistent tensor key naming.

The input tensor is retrieved with key "wave" (line 92), but the output is emitted with key "modulated_signal" (line 111). While this works because SinkOp accepts any key, using consistent naming (e.g., always "wave") would improve clarity and maintainability.

Also applies to: 111-111
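
A runnable sketch of the naming suggestion, written outside the operator for brevity (function names here are hypothetical, not the app's code):

```python
import numpy as np


def apply_modulation(wave: np.ndarray) -> np.ndarray:
    """Placeholder for the 300 Hz modulation step; not the app's actual math."""
    return wave


def modulate(tensor_map: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Receive and re-emit under the same key, as the comment suggests."""
    wave = tensor_map["wave"]
    return {"wave": apply_modulation(wave)}  # instead of {"modulated_signal": ...}


print(modulate({"wave": np.zeros(3000)})["wave"].shape)  # (3000,)
```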


128-128: Handle empty dict edge case.

Using next(iter(op_input.receive("in").values())) will raise StopIteration if the received dict is empty. While this shouldn't happen in normal operation, adding a check or using .get() with a default would make the code more robust.

For example:

received = op_input.receive("in")
if received:
    _tensor = next(iter(received.values()))
    # process tensor
applications/pipeline_visualization/cpp/nats_logger.cpp (1)

217-219: Reconsider flush after every publish.

Line 219 calls natsConnection_Flush after every publish, which forces a round-trip with the server. This contradicts the optimization at line 141 where natsOptions_SetSendAsap(true) is set specifically to "reduce latency" by sending immediately without requiring flush. Consider removing the flush call for better performance, or document why it's needed despite SetSendAsap.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bbc6ddf and f949acc.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (30)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • applications/pipeline_visualization/cpp/metadata.json
  • applications/pipeline_visualization/schemas/message.fbs
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/schemas/tensor.fbs
  • applications/pipeline_visualization/python/metadata.json
  • .vscode/tasks.json
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml
  • applications/pipeline_visualization/python/pipeline_visualization.yaml
  • applications/CMakeLists.txt
🧰 Additional context used
🧠 Learnings (14)
📓 Common learnings
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to applications/CMakeLists.txt : Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to operators/CMakeLists.txt : Register operators in operators/CMakeLists.txt using add_holohub_operator(...) and optional DEPENDS EXTENSIONS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to workflow/CMakeLists.txt : Register workflows in workflow/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/operators/*/*.{py,cpp} : Main operator filename should match the directory name with the appropriate extension

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Each application must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/workflows/*/CMakeLists.txt : Each workflow must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/pkg/*/CMakeLists.txt : Each package must provide a CMakeLists.txt for package configuration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to pkg/CMakeLists.txt : Register packages in pkg/CMakeLists.txt using add_holohub_package(...)

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/gxf_extensions/*/CMakeLists.txt : Each GXF extension must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Applications should include a testing section in CMakeLists.txt using CTest for functional tests

Applied to files:

  • applications/pipeline_visualization/cpp/CMakeLists.txt
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/python/pipeline_visualization.py
  • applications/pipeline_visualization/README.md
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/README.md : Include performance insights, considerations, and benchmarking results in application README when relevant

Applied to files:

  • applications/pipeline_visualization/README.md
🧬 Code graph analysis (12)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (304-312)
  • initialize (304-304)
  • initialize (481-483)
  • initialize (481-481)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/visualizer/graph_components.py (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (2)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/visualizer/visualizer_static.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-181)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (3)
  • update_subject (102-124)
  • update_source (134-228)
  • run (230-244)
applications/pipeline_visualization/start_nats_server.sh (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (217-231)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-181)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (9)
  • data (72-76)
  • data (123-127)
  • spec (54-54)
  • tensor (89-93)
  • tensor_map (106-110)
  • AsyncNatsBackend (145-145)
  • AsyncNatsBackend (146-146)
  • entry (168-168)
  • entry (178-178)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
  • CreateTensor (25-111)
  • CreateTensor (25-27)
applications/pipeline_visualization/python/pipeline_visualization.py (2)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (15)
  • SourceOp (52-52)
  • spec (64-64)
  • spec (64-64)
  • spec (135-138)
  • spec (135-135)
  • spec (198-198)
  • spec (198-198)
  • op_input (140-181)
  • op_input (140-141)
  • op_input (200-206)
  • op_input (200-201)
  • ModulateOp (123-123)
  • TimeSeriesApp (230-235)
  • main (278-371)
  • main (278-278)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)
  • setup (281-296)
  • setup (281-281)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/nats_async.py (3)
applications/ehr_query_llm/lmm/riva_asr.py (1)
  • put (253-254)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (217-231)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
🪛 Biome (2.1.2)
.vscode/launch.json

[error] 1286-1286: Expected a property but instead found '}'.

Expected a property here.

(parse)


[error] 1300-1300: Expected a property but instead found '}'.

Expected a property here.

(parse)

🪛 Clang (14.0.6)
applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'flatbuffers/tensor_generated.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

🪛 markdownlint-cli2 (0.18.1)
applications/pipeline_visualization/README.md

5-5: Images should have alternate text (alt text)

(MD045, no-alt-text)


128-128: Bare URL used

(MD034, no-bare-urls)


140-140: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


158-158: Bare URL used

(MD034, no-bare-urls)


195-195: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.4)
applications/pipeline_visualization/visualizer/graph_components.py

1-1: Shebang is present but file is not executable

(EXE001)


17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)

applications/pipeline_visualization/visualizer/visualizer_static.py

17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


183-183: Avoid specifying long messages outside the exception class

(TRY003)


228-228: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/visualizer_dynamic.py

17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


178-178: Avoid specifying long messages outside the exception class

(TRY003)


241-241: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/python/pipeline_visualization.py

54-54: Unused method argument: op_input

(ARG002)


54-54: Unused method argument: context

(ARG002)


89-89: Unused method argument: context

(ARG002)


125-125: Unused method argument: op_output

(ARG002)


125-125: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/nats_async.py

16-16: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


74-74: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


112-112: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


149-149: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


163-163: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


181-181: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

applications/pipeline_visualization/visualizer/styles.py

1-1: Shebang is present but file is not executable

(EXE001)


17-17: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)

applications/pipeline_visualization/visualizer/tensor_to_numpy.py

16-16: Unused noqa directive (non-enabled: E501)

Remove unused noqa directive

(RUF100)


56-56: Avoid specifying long messages outside the exception class

(TRY003)

🪛 Shellcheck (0.11.0)
applications/pipeline_visualization/start_nats_server.sh

[error] 34-34: Double quote array expansions to avoid re-splitting elements.

(SC2068)


[warning] 36-36: Use single quotes, otherwise this expands now rather than when signalled.

(SC2064)

applications/pipeline_visualization/visualizer/start_visualizer.sh

[warning] 21-21: Declare and assign separately to avoid masking return values.

(SC2155)

🔇 Additional comments (9)
applications/pipeline_visualization/start_nats_server.sh (2)

1-18: LGTM!

Standard license header and solid error handling setup with set -eu to catch errors and undefined variables early.


20-31: LGTM!

Temp directory creation and NATS config generation via heredoc are implemented correctly. Config extension with port, monitor_port, and 8MB max_payload is appropriate.

applications/pipeline_visualization/cpp/nats_logger.hpp (1)

35-190: LGTM! Well-structured async data logger.

The NatsLogger class is well-designed with comprehensive documentation, proper use of the PIMPL pattern for implementation hiding, and a cleanly encapsulated AsyncNatsBackend. The API surface is clear and all overrides are properly marked.

applications/pipeline_visualization/visualizer/start_visualizer.sh (1)

17-22: LGTM! Clean launcher script.

The script properly sets up the Python environment and provides flexible visualizer selection with a sensible default.

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

25-69: LGTM! Clean tensor conversion with proper DLPack mapping.

The function correctly maps DLDataType components to NumPy dtypes and handles both scalar and vector types. The error handling for unsupported types is appropriate for the current use case (int, uint, float types).

applications/pipeline_visualization/python/pipeline_visualization.py (2)

166-189: LGTM!

The pipeline composition is correct with proper operator sequencing (source → modulate → sink) and conditional NATS logger integration. The parameter forwarding via self.kwargs("nats_logger") allows YAML configuration to override defaults.


237-260: LGTM!

The CLI argument parsing and application initialization logic is well-structured. The config file loading properly checks for existence before attempting to load, and the default path resolution using Path(__file__).parent is appropriate.

applications/pipeline_visualization/cpp/nats_logger.cpp (2)

48-60: LGTM!

The NATS_CALL macro provides centralized error handling for NATS API calls with detailed error messages including statement, line, file, and error description. The use of statement expressions is appropriate for this pattern.
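For readers of the review, a sketch of what such a macro can look like (illustrative only; the PR's NATS_CALL may differ in naming and details):

#include <nats/nats.h>
#include <stdexcept>
#include <string>

// Wrap a NATS call in a GCC/Clang statement expression; throw with the failing
// statement, error text, and location when the status is not NATS_OK.
#define NATS_CALL(stmt)                                                        \
  ({                                                                           \
    natsStatus _status = (stmt);                                               \
    if (_status != NATS_OK) {                                                  \
      throw std::runtime_error(std::string("NATS call failed: ") + #stmt +     \
                               " [" + natsStatus_GetText(_status) + "] at " +  \
                               __FILE__ + ":" + std::to_string(__LINE__));     \
    }                                                                          \
    _status;                                                                   \
  })

A call site then reads, e.g., NATS_CALL(natsConnection_PublishString(conn, subject, msg)); where conn, subject, and msg are whatever the caller has in scope.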


72-89: LGTM!

The custom Deleter template and type aliases provide proper RAII semantics for NATS resources. This ensures automatic cleanup and prevents resource leaks.
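For context, a generic sketch of the deleter-plus-alias pattern being praised here (names are illustrative, not the PR's actual aliases):

#include <memory>
#include <nats/nats.h>

// Deleter parameterized on the matching *_Destroy function, so each NATS handle
// is released automatically when its unique_ptr goes out of scope.
template <typename T, void (*Destroy)(T*)>
struct NatsDeleter {
  void operator()(T* ptr) const noexcept {
    if (ptr != nullptr) { Destroy(ptr); }
  }
};

using NatsOptionsPtr =
    std::unique_ptr<natsOptions, NatsDeleter<natsOptions, natsOptions_Destroy>>;
using NatsConnectionPtr =
    std::unique_ptr<natsConnection, NatsDeleter<natsConnection, natsConnection_Destroy>>;
using NatsSubscriptionPtr =
    std::unique_ptr<natsSubscription, NatsDeleter<natsSubscription, natsSubscription_Destroy>>;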

@greptile-apps greptile-apps bot left a comment

28 files reviewed, 2 comments

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (11)
.vscode/launch.json (2)

1284-1287: Remove trailing comma in presentation object.

The presentation object contains a trailing comma after "group": "pipeline_visualization", which causes a JSON parse error. This is the same issue flagged in the previous review.

Apply this diff to fix the syntax error:

             "presentation": {
-                "group": "pipeline_visualization",
+                "group": "pipeline_visualization"
             }

1298-1301: Remove trailing comma in presentation object.

The presentation object contains a trailing comma after "group": "pipeline_visualization", which causes a JSON parse error. This is the same issue flagged in the previous review.

Apply this diff to fix the syntax error:

             "presentation": {
-                "group": "pipeline_visualization",
+                "group": "pipeline_visualization"
             }
applications/pipeline_visualization/start_nats_server.sh (2)

34-34: Quote the array expansion to prevent word splitting.

The $@ parameter expansion should be quoted to prevent re-splitting of arguments containing spaces.

Apply this fix:

-docker run --network host -ti -v $tmp_dir/nats-server.conf:/nats-server.conf nats:latest $@
+docker run --network host -ti -v $tmp_dir/nats-server.conf:/nats-server.conf nats:latest "$@"

36-36: Use single quotes in trap to defer variable expansion.

The trap command with double quotes causes $tmp_dir to expand immediately rather than when the EXIT signal is received. Use single quotes to defer expansion until the trap executes.

Apply this fix:

-trap 'rm -rf $tmp_dir' EXIT
+trap 'rm -rf "$tmp_dir"' EXIT
applications/pipeline_visualization/cpp/create_tensor.cpp (1)

78-87: Critical: cudaMemcpyAsync with non-pinned memory will fail.

The destination pointer from CreateUninitializedVector (line 72) is pageable host memory. Passing it to cudaMemcpyAsync requires it to be pinned, causing cudaErrorInvalidValue when a stream is provided.

Apply this diff to use synchronous copy:

-        if (stream) {
-          HOLOSCAN_CUDA_CALL_THROW_ERROR(
-              cudaMemcpyAsync(dst, tensor->data(), total_bytes, copy_type, *stream),
-              "Failed to copy GPU data to host for tensor materialization");
-        } else {
-          HOLOSCAN_CUDA_CALL_THROW_ERROR(
-              cudaMemcpy(dst, tensor->data(), total_bytes, copy_type),
-              "Failed to copy GPU data to host for tensor materialization");
-        }
+        HOLOSCAN_CUDA_CALL_THROW_ERROR(
+            cudaMemcpy(dst, tensor->data(), total_bytes, copy_type),
+            "Failed to copy GPU data to host for tensor materialization");
applications/pipeline_visualization/cpp/create_tensor.hpp (1)

21-21: Critical: Fix the FlatBuffers include path.

The include path <flatbuffers/tensor_generated.h> doesn't match the generated location, causing compilation failure. Use the correct generated path.

Apply this diff:

-#include <flatbuffers/tensor_generated.h>
+#include "pipeline_visualization/flatbuffers/tensor_generated.h"
applications/pipeline_visualization/python/nats_logger_pybind.cpp (1)

147-147: Major: Fix default value mismatch.

The binding default for large_data_worker_sleep_time is 50000, but the PyNatsLogger constructor default (line 58) is 200000. This inconsistency will confuse users.

Apply this diff:

            "large_data_max_queue_size"_a = 1000,
-           "large_data_worker_sleep_time"_a = 50000,
+           "large_data_worker_sleep_time"_a = 200000,
            "large_data_queue_policy"_a = AsyncQueuePolicy::kReject,
applications/pipeline_visualization/cpp/nats_logger.cpp (4)

321-332: Forward the CUDA stream parameter to the base class.

The stream parameter is received but not passed to AsyncDataLoggerResource::log_data, causing GPU buffers to be synchronized on the default stream instead of the producer's stream.

Apply this diff:

   return AsyncDataLoggerResource::log_data(
-      data, unique_id, acquisition_timestamp, metadata, io_type);
+      data, unique_id, acquisition_timestamp, metadata, io_type, stream);

334-346: Forward the CUDA stream parameter to the base class.

The stream parameter is received but not passed to AsyncDataLoggerResource::log_tensor_data.

Apply this diff:

   return AsyncDataLoggerResource::log_tensor_data(
-      tensor, unique_id, acquisition_timestamp, metadata, io_type);
+      tensor, unique_id, acquisition_timestamp, metadata, io_type, stream);

348-360: Forward the CUDA stream parameter to the base class.

The stream parameter is received but not passed to AsyncDataLoggerResource::log_tensormap_data.

Apply this diff:

   return AsyncDataLoggerResource::log_tensormap_data(
-      tensor_map, unique_id, acquisition_timestamp, metadata, io_type);
+      tensor_map, unique_id, acquisition_timestamp, metadata, io_type, stream);

526-549: Propagate publish failures for TensorMap entries.

The loop publishes each tensor but ignores the return value of publish_data. If any publish fails (e.g., NATS unavailable), the function still returns true, hiding the failure from callers.

Apply this diff:

     for (const auto& [key, tensor] : tensor_map) {
       ::flatbuffers::FlatBufferBuilder builder;
       auto offset = pipeline_visualization::flatbuffers::CreateMessage(
           builder,
           builder.CreateString(entry.unique_id + "." + key),
           static_cast<pipeline_visualization::flatbuffers::IOType>(entry.io_type),
           entry.acquisition_timestamp,
           entry.emit_timestamp,
           nats_logger_->should_log_tensor_data_content()
               ? pipeline_visualization::flatbuffers::Payload::
                     Payload_pipeline_visualization_flatbuffers_Tensor
               : pipeline_visualization::flatbuffers::Payload::Payload_NONE,
           nats_logger_->should_log_tensor_data_content()
               ? pipeline_visualization::flatbuffers::CreateTensor(builder, tensor, entry.stream)
                     .Union()
               : 0);
       builder.Finish(offset);
-      nats_logger_->impl_->publish_data(nats_logger_->subject_prefix_.get() + ".data",
-                                        builder.GetBufferPointer(),
-                                        builder.GetSize());
+      const bool ok = nats_logger_->impl_->publish_data(
+          nats_logger_->subject_prefix_.get() + ".data",
+          builder.GetBufferPointer(),
+          builder.GetSize());
+      if (!ok) {
+        return false;
+      }
     }
     return true;
🧹 Nitpick comments (5)
applications/pipeline_visualization/README.md (2)

128-128: Consider using markdown link syntax for URLs.

The bare URLs at lines 128 and 158 trigger markdown linting warnings. While the current format is readable, you could wrap them in angle brackets or use proper markdown link syntax for lint compliance:

The web interface will be available at: **<http://localhost:8050>**

or

1. Open your web browser to [http://localhost:8050](http://localhost:8050)

Also applies to: 158-158


195-203: Add a language identifier to the fenced code block.

The code block showing the Message structure lacks a language identifier, which triggers a markdown linting warning. Consider adding an appropriate identifier:

-```
+```text
 Message {
   unique_id: string          // Format: "operator_name.port_name"
   ...
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)

174-178: Consider extracting the exception message.

The ValueError uses an inline f-string message. For better maintainability, consider defining a custom exception class or extracting the message template to a module-level constant.

applications/pipeline_visualization/visualizer/graph_components.py (1)

1-1: Shebang without executable permission.

The file has a shebang but isn't executable. Either mark it executable (chmod +x) if it's meant to be run directly, or remove the shebang if it's only imported as a module.

applications/pipeline_visualization/cpp/nats_logger.cpp (1)

210-223: Consider consistent error handling strategy.

publish_data returns false for connection unavailability but throws (via NATS_CALL) for other NATS errors. While this may be intentional (critical vs transient errors), it requires callers to handle both return values and exceptions.
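A hypothetical call site under the current contract has to cover both paths, roughly:

// Sketch only; publish_data, subject, buf, and len stand in for the PR's internals.
try {
  if (!publish_data(subject, buf, len)) {
    HOLOSCAN_LOG_WARN("NATS connection unavailable; message dropped");  // transient case
  }
} catch (const std::exception& e) {
  HOLOSCAN_LOG_ERROR("NATS publish failed: {}", e.what());  // other NATS errors
}

Documenting which failures surface as false and which throw would make this easier on callers.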

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f949acc and 5a70316.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (30)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • applications/CMakeLists.txt
  • applications/pipeline_visualization/visualizer/start_visualizer.sh
  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/python/pipeline_visualization.yaml
  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/visualizer/styles.py
  • .vscode/tasks.json
  • applications/pipeline_visualization/requirements.txt
  • applications/pipeline_visualization/cpp/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (1)
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/metadata.json: In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category
The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/metadata.json
🧠 Learnings (8)
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/README.md
  • applications/pipeline_visualization/python/pipeline_visualization.py
  • applications/pipeline_visualization/visualizer/visualizer_static.py
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/metadata.json : Each application must include a metadata.json that follows the application schema

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-30T19:04:41.239Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1203
File: applications/video_streaming/video_streaming_server/cpp/metadata.json:42-51
Timestamp: 2025-10-30T19:04:41.239Z
Learning: In the video_streaming application structure, umbrella metadata (applications/video_streaming/metadata.json) uses language-qualified mode names like "server_cpp" and "server_python" to distinguish between implementations, while component-specific metadata files (e.g., applications/video_streaming/video_streaming_server/cpp/metadata.json) use simple mode names like "server" since they are already scoped to a specific language implementation by their directory structure.

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/README.md : Include performance insights, considerations, and benchmarking results in application README when relevant

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/README.md : Each application must include a README.md describing purpose and architecture

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to applications/CMakeLists.txt : Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
🧬 Code graph analysis (12)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (2)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
operators/medical_imaging/core/io_type.py (1)
  • IOType (19-25)
applications/pipeline_visualization/visualizer/nats_async.py (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (218-232)
applications/pipeline_visualization/start_nats_server.sh (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (218-232)
applications/pipeline_visualization/cpp/create_tensor.cpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (3)
  • tensor (89-93)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (311-319)
  • initialize (311-311)
  • initialize (488-490)
  • initialize (488-488)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/cpp/nats_logger.cpp (3)
applications/pipeline_visualization/cpp/nats_logger.hpp (9)
  • data (72-76)
  • data (123-127)
  • spec (54-54)
  • tensor (89-93)
  • tensor_map (106-110)
  • AsyncNatsBackend (145-145)
  • AsyncNatsBackend (146-146)
  • entry (168-168)
  • entry (178-178)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
  • CreateTensor (25-111)
  • CreateTensor (25-27)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (55-57)
applications/pipeline_visualization/python/pipeline_visualization.py (4)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (15)
  • spec (64-64)
  • spec (64-64)
  • spec (133-136)
  • spec (133-133)
  • spec (196-196)
  • spec (196-196)
  • op_input (138-179)
  • op_input (138-139)
  • op_input (198-204)
  • op_input (198-199)
  • wave (73-73)
  • ModulateOp (121-121)
  • TimeSeriesApp (228-233)
  • main (276-374)
  • main (276-276)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)
  • setup (288-303)
  • setup (288-288)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (218-232)
applications/pipeline_visualization/visualizer/visualizer_static.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-188)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (4)
  • Visualizer (34-244)
  • update_subject (102-124)
  • update_source (134-228)
  • run (230-244)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-188)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_static.py (3)
  • Visualizer (34-232)
  • update_subject (107-129)
  • run (218-232)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
🪛 Biome (2.1.2)
.vscode/launch.json

[error] 1286-1286: Expected a property but instead found '}'.

Expected a property here.

(parse)


[error] 1300-1300: Expected a property but instead found '}'.

Expected a property here.

(parse)

🪛 Clang (14.0.6)
applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'flatbuffers/tensor_generated.h' file not found

(clang-diagnostic-error)

🪛 markdownlint-cli2 (0.18.1)
applications/pipeline_visualization/README.md

128-128: Bare URL used

(MD034, no-bare-urls)


158-158: Bare URL used

(MD034, no-bare-urls)


195-195: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py

56-56: Avoid specifying long messages outside the exception class

(TRY003)

applications/pipeline_visualization/visualizer/nats_async.py

112-112: Redundant exception object included in logging.exception call

(TRY401)


182-182: Consider moving this statement to an else block

(TRY300)


187-187: Redundant exception object included in logging.exception call

(TRY401)

applications/pipeline_visualization/visualizer/graph_components.py

1-1: Shebang is present but file is not executable

(EXE001)

applications/pipeline_visualization/python/pipeline_visualization.py

54-54: Unused method argument: op_input

(ARG002)


54-54: Unused method argument: context

(ARG002)


89-89: Unused method argument: context

(ARG002)


125-125: Unused method argument: op_output

(ARG002)


125-125: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/visualizer_static.py

183-183: Avoid specifying long messages outside the exception class

(TRY003)


229-229: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/visualizer_dynamic.py

178-178: Avoid specifying long messages outside the exception class

(TRY003)


241-241: Possible binding to all interfaces

(S104)

🪛 Shellcheck (0.11.0)
applications/pipeline_visualization/start_nats_server.sh

[error] 34-34: Double quote array expansions to avoid re-splitting elements.

(SC2068)

🔇 Additional comments (19)
applications/pipeline_visualization/README.md (2)

1-41: LGTM!

The overview and architecture sections clearly explain the three main components and their interactions. The Mermaid diagram effectively visualizes the data flow from the Holoscan application through NATS to the web visualizer.


124-126: LGTM!

The path correctly points to applications/pipeline_visualization/visualizer, which aligns with the project structure.

applications/pipeline_visualization/cpp/metadata.json (1)

1-38: LGTM!

The metadata.json properly follows the application schema with "Visualization" as the category (first tag), which is in the approved categories list. The structure is consistent with other applications in the repository.

As per coding guidelines

applications/pipeline_visualization/python/metadata.json (1)

1-38: LGTM!

The metadata.json correctly follows the application schema with "Visualization" as the category (first tag), which is in the approved categories list. The workdir difference from the C++ version (holohub_bin vs holohub_app_bin) is appropriate since Python runs from source while C++ runs from the compiled binary.

As per coding guidelines

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

25-69: LGTM!

The tensor conversion logic correctly maps DLDataTypeCode to NumPy dtype, handles both scalar and vector types (via lanes), and constructs the appropriate little-endian dtype string. The implementation is clear and well-documented.

Note that the function intentionally raises an error for unsupported types (complex, opaque handles), which is appropriate for this visualization use case.

applications/pipeline_visualization/python/pydoc.hpp (1)

27-120: LGTM!

The Python documentation for NatsLogger is comprehensive and well-structured. It clearly explains:

  • All constructor parameters and their defaults
  • Behavior of the two queue systems (regular and large data)
  • Filtering logic for allowlist/denylist patterns
  • Unique ID format in different contexts

The documentation will be very helpful for users working with the Python bindings.

Note: The static analysis error about missing 'macros.hpp' is a false positive - this header is provided by the Holoscan SDK build system.

applications/pipeline_visualization/schemas/message.fbs (1)

18-37: LGTM!

The FlatBuffers schema is well-designed with:

  • Clear IOType enum for input/output distinction
  • Extensible Payload union for future payload types
  • Appropriate timestamp fields using int64 for nanosecond precision
  • Proper root_type declaration
applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1)

1-25: LGTM!

The YAML configuration provides sensible defaults for the Pipeline Visualization example with all logging features enabled. The commented-out allowlist/denylist patterns serve as helpful examples for users who want to customize filtering.

applications/pipeline_visualization/schemas/tensor.fbs (1)

20-64: LGTM! Schema design is clean and appropriate for the use case.

The FlatBuffers schema correctly models DLPack tensor structures with a practical subset of data types and device types suitable for visualization purposes.

applications/pipeline_visualization/cpp/create_tensor.cpp (2)

25-56: LGTM! Metadata extraction and serialization logic is correct.

The function properly validates the DLPack context, extracts tensor metadata (shape, dtype, device, strides), and handles edge cases like missing strides.
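As a side note for readers, the usual handling for a missing strides pointer (a sketch of the standard DLPack convention, not necessarily the PR's exact code) is to derive row-major strides from the shape:

#include <cstdint>
#include <vector>

// When DLTensor::strides is nullptr the tensor is compact row-major, so strides
// (in elements) can be reconstructed from the shape.
std::vector<int64_t> default_strides(const std::vector<int64_t>& shape) {
  std::vector<int64_t> strides(shape.size(), 1);
  int64_t running = 1;
  for (size_t i = shape.size(); i-- > 0;) {
    strides[i] = running;
    running *= shape[i];
  }
  return strides;
}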


90-105: LGTM! CPU/Host memory handling is correct.

The direct memcpy for CPU/CUDAHost memory is appropriate, and throwing for unsupported device types provides clear error signaling.

applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)

98-124: LGTM! Connection management logic is sound.

The callback correctly handles subscription/unsubscription and manages UI state transitions between connected and disconnected modes.

applications/pipeline_visualization/visualizer/graph_components.py (1)

28-87: LGTM! Graph creation logic correctly handles static and dynamic modes.

The function properly branches on whether a message is provided, extracts appropriate metadata, and creates the correct figure and component ID formats.

applications/pipeline_visualization/cpp/create_tensor.hpp (1)

31-57: LGTM! Comprehensive documentation and clear API.

The function documentation thoroughly describes behavior, parameters, return values, exceptions, and important notes about data materialization.

applications/pipeline_visualization/cpp/nats_logger.cpp (5)

166-167: Previously reported issue has been correctly addressed.

The nullable return from natsMsg_GetReply is now properly guarded before constructing the std::string.


256-261: Previously reported issue has been correctly addressed.

The rate-limit logic now checks for non-positive rates before division, treating them as "no throttling."


485-486: LGTM!

The AsyncNatsBackend constructor correctly initializes with a pointer to the parent NatsLogger.


488-492: LGTM!

The initialize and shutdown methods provide clean lifecycle hooks for the backend.


557-559: LGTM!

Delegating process_large_data_entry to process_data_entry is appropriate for this implementation.

@greptile-apps greptile-apps bot left a comment

28 files reviewed, no comments

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (5)
applications/pipeline_visualization/requirements.txt (1)

4-4: Fix numpy version constraint to be compatible with pandas 2.3.3.

This issue was flagged in a previous review and remains unresolved. The constraint numpy>=1.0.4,<3.0 is incompatible with pandas 2.3.3, which requires numpy >= 1.20.3; numpy also needs to stay below 2.0 until pandas is rebuilt for NumPy 2.0 compatibility. The overly permissive upper bound allows numpy 2.x, which will cause runtime conflicts.

Apply this diff to fix the constraint:

-numpy>=1.0.4,<3.0
+numpy>=1.20.3,<2.0
applications/pipeline_visualization/start_nats_server.sh (1)

34-34: Quote $@ to preserve argument boundaries.

The array parameter expansion $@ should be quoted to prevent re-splitting of arguments containing spaces. This was flagged in a previous review.

Apply this fix:

-docker run --network host -ti -v $tmp_dir/nats-server.conf:/nats-server.conf nats:latest $@
+docker run --network host -ti -v $tmp_dir/nats-server.conf:/nats-server.conf nats:latest "$@"
applications/pipeline_visualization/cpp/create_tensor.cpp (1)

78-86: Async copy into pageable FlatBuffer memory breaks GPU serialization.

fbb.CreateUninitializedVector hands back regular pageable host memory. Issuing cudaMemcpyAsync into that buffer (when a stream is provided) violates CUDA’s requirements for async transfers and reliably raises cudaErrorInvalidValue, so GPU tensors never materialize. Please fall back to a synchronous cudaMemcpy (or explicitly allocate/register pinned host memory) before writing into the FlatBuffer payload.

-        if (stream) {
-          HOLOSCAN_CUDA_CALL_THROW_ERROR(
-              cudaMemcpyAsync(dst, tensor->data(), total_bytes, copy_type, *stream),
-              "Failed to copy GPU data to host for tensor materialization");
-        } else {
-          HOLOSCAN_CUDA_CALL_THROW_ERROR(
-              cudaMemcpy(dst, tensor->data(), total_bytes, copy_type),
-              "Failed to copy GPU data to host for tensor materialization");
-        }
+        HOLOSCAN_CUDA_CALL_THROW_ERROR(
+            cudaMemcpy(dst, tensor->data(), total_bytes, copy_type),
+            "Failed to copy GPU data to host for tensor materialization");
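
If the asynchronous path were worth keeping, one alternative (a sketch assuming dst, total_bytes, copy_type, and stream are exactly as in the diff above) is to pin the destination for the duration of the copy:

// Hypothetical alternative, not a required change: temporarily page-lock the
// FlatBuffer scratch region so cudaMemcpyAsync is a true async copy, then
// synchronize and unpin before the buffer is serialized.
HOLOSCAN_CUDA_CALL_THROW_ERROR(
    cudaHostRegister(dst, total_bytes, cudaHostRegisterDefault),
    "Failed to pin host buffer for async copy");
HOLOSCAN_CUDA_CALL_THROW_ERROR(
    cudaMemcpyAsync(dst, tensor->data(), total_bytes, copy_type, *stream),
    "Failed to copy GPU data to host for tensor materialization");
HOLOSCAN_CUDA_CALL_THROW_ERROR(
    cudaStreamSynchronize(*stream),
    "Failed to synchronize stream after tensor copy");
cudaHostUnregister(dst);

Registering and unregistering per call has its own overhead, so the plain cudaMemcpy in the suggested diff is usually the simpler choice for small tensors.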
applications/pipeline_visualization/cpp/nats_logger.cpp (2)

322-361: Forward the CUDA stream parameter to preserve GPU synchronization.

All three logging method overrides (log_data, log_tensor_data, log_tensormap_data) drop the stream argument when delegating to the base AsyncDataLoggerResource, causing GPU buffers to synchronize on the default stream instead of the producer's stream. This can lead to incorrect behavior or race conditions when logging GPU data.

Apply this diff to forward the stream parameter:

 bool NatsLogger::log_data(const std::any& data, const std::string& unique_id,
                           int64_t acquisition_timestamp,
                           const std::shared_ptr<MetadataDictionary>& metadata,
                           IOSpec::IOType io_type, std::optional<cudaStream_t> stream) {
   const int64_t timestamp_ns = get_timestamp();
   // Check if this message should be logged based on filters and rate limiting
   if (!impl_->should_log_message(this, io_type, unique_id, timestamp_ns)) {
     return true;
   }
   return AsyncDataLoggerResource::log_data(
-      data, unique_id, acquisition_timestamp, metadata, io_type);
+      data, unique_id, acquisition_timestamp, metadata, io_type, stream);
 }
 
 bool NatsLogger::log_tensor_data(const std::shared_ptr<Tensor>& tensor,
                                  const std::string& unique_id, int64_t acquisition_timestamp,
                                  const std::shared_ptr<MetadataDictionary>& metadata,
                                  IOSpec::IOType io_type, std::optional<cudaStream_t> stream) {
   const int64_t timestamp_ns = get_timestamp();
   // Check if this message should be logged based on filters and rate limiting
   if (!impl_->should_log_message(this, io_type, unique_id, timestamp_ns)) {
     return true;
   }
 
   return AsyncDataLoggerResource::log_tensor_data(
-      tensor, unique_id, acquisition_timestamp, metadata, io_type);
+      tensor, unique_id, acquisition_timestamp, metadata, io_type, stream);
 }
 
 bool NatsLogger::log_tensormap_data(const TensorMap& tensor_map, const std::string& unique_id,
                                     int64_t acquisition_timestamp,
                                     const std::shared_ptr<MetadataDictionary>& metadata,
                                     IOSpec::IOType io_type, std::optional<cudaStream_t> stream) {
   const int64_t timestamp_ns = get_timestamp();
   // Check if this message should be logged based on filters and rate limiting
   if (!impl_->should_log_message(this, io_type, unique_id, timestamp_ns)) {
     return true;
   }
 
   return AsyncDataLoggerResource::log_tensormap_data(
-      tensor_map, unique_id, acquisition_timestamp, metadata, io_type);
+      tensor_map, unique_id, acquisition_timestamp, metadata, io_type, stream);
 }

528-550: Propagate publish failures for TensorMap entries.

The loop over tensor_map entries ignores the boolean return value from publish_data (line 546-548), so publish failures (e.g., when the NATS connection is unavailable) are not propagated to callers. This is inconsistent with the TensorData branch (lines 523-526), which correctly returns the publish result.

Apply this diff to propagate failures:

   } else if (entry.type == DataEntry::Type::TensorMapData) {
     auto tensor_map = std::get<holoscan::TensorMap>(entry.data);
     for (const auto& [key, tensor] : tensor_map) {
       ::flatbuffers::FlatBufferBuilder builder;
       auto offset = pipeline_visualization::flatbuffers::CreateMessage(
           builder,
           builder.CreateString(entry.unique_id + "." + key),
           static_cast<pipeline_visualization::flatbuffers::IOType>(entry.io_type),
           entry.acquisition_timestamp,
           entry.emit_timestamp,
           nats_logger_->should_log_tensor_data_content()
               ? pipeline_visualization::flatbuffers::Payload::
                     Payload_pipeline_visualization_flatbuffers_Tensor
               : pipeline_visualization::flatbuffers::Payload::Payload_NONE,
           nats_logger_->should_log_tensor_data_content()
               ? pipeline_visualization::flatbuffers::CreateTensor(builder, tensor, entry.stream)
                     .Union()
               : 0);
       builder.Finish(offset);
-      nats_logger_->impl_->publish_data(nats_logger_->subject_prefix_.get() + ".data",
-                                        builder.GetBufferPointer(),
-                                        builder.GetSize());
+      const bool ok = nats_logger_->impl_->publish_data(
+          nats_logger_->subject_prefix_.get() + ".data",
+          builder.GetBufferPointer(),
+          builder.GetSize());
+      if (!ok) {
+        return false;
+      }
     }
     return true;
🧹 Nitpick comments (6)
applications/pipeline_visualization/start_nats_server.sh (1)

20-20: Quote variables to handle edge cases with spaces.

Variables should be quoted to prevent word-splitting if they contain spaces, even though mktemp -d output is unlikely to include them. This follows shell best practices for robustness.

Apply this diff:

-tmp_dir=$(mktemp -d)
+tmp_dir="$(mktemp -d)"

And:

-cat <<EOF > $tmp_dir/nats-server.conf
+cat <<EOF > "$tmp_dir/nats-server.conf"

Also applies to: 23-23

applications/pipeline_visualization/README.md (2)

128-128: Format URLs as markdown links.

The bare URLs should be formatted as markdown links for better readability and adherence to markdown best practices.

Apply this diff:

-The web interface will be available at: **http://localhost:8050**
+The web interface will be available at: **[http://localhost:8050](http://localhost:8050)**
-1. Open your web browser to http://localhost:8050
+1. Open your web browser to [http://localhost:8050](http://localhost:8050)

Also applies to: 158-158


195-203: Specify language for fenced code block.

The fenced code block should have a language identifier for proper syntax highlighting.

Apply this diff:

-```
+```text
 Message {
   unique_id: string          // Format: "operator_name.port_name"
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

49-61: Consider validating bits divisibility by 8.

The code assumes bits is divisible by 8 when computing bits // 8 for the byte size. While DLPack typically uses standard bit widths (8, 16, 32, 64), adding a validation check would make the code more defensive against unexpected input.

Consider adding validation:

 # Compose NumPy dtype string, e.g., '<i4' for int32 little-endian
 # Format: '<' (little-endian) + kind (i/u/f) + byte_size
 # FlatBuffers uses little-endian byte order by default
+if bits % 8 != 0:
+    raise ValueError(f"Unsupported bit width: {bits} (must be divisible by 8)")
 dtype_str = f"<{kind}{bits // 8}"
applications/pipeline_visualization/visualizer/nats_async.py (1)

176-188: Consider minor style improvements for exception handling.

The implementation is correct and handles errors appropriately. Two optional style improvements:

  1. The return True on line 182 could be in an else block for clearer control flow
  2. Lines 112 and 187: logging.exception already includes exception info, so passing the exception object is redundant

Apply this diff for clearer control flow:

     async def _publish(self, subject: str, payload: bytes) -> bool:
         try:
             if self._use_js:
                 await self._js_context.publish(subject, payload)
             else:
                 await self._connection.publish(subject, payload)
-            return True
         except nats.errors.ConnectionClosedError:
-            logger.exception("Failed to send message since NATS connect was closed")
+            logger.exception("Failed to send message since NATS connection was closed")
             return False
-        except Exception as e:
-            logger.exception(f"Failed to send message to {subject}: {e}")
+        except Exception:
+            logger.exception(f"Failed to send message to {subject}")
             return False
+        else:
+            return True
applications/pipeline_visualization/visualizer/start_visualizer.sh (1)

20-22: Consider more robust path handling.

The hardcoded relative path assumes the script is executed from applications/pipeline_visualization/visualizer/. While the README documents the correct usage, the script could be made more robust by detecting the script's location or the repository root.

Consider using script-relative path resolution:

-# Add the path to the generated flatbuffers files to the PYTHONPATH
-FLATBUFFERS_PATH=$(pwd)/../../../build/pipeline_visualization/applications/pipeline_visualization/flatbuffers/
+# Get the directory containing this script
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# Add the path to the generated flatbuffers files to the PYTHONPATH
+FLATBUFFERS_PATH=${SCRIPT_DIR}/../../../build/pipeline_visualization/applications/pipeline_visualization/flatbuffers/
 export PYTHONPATH=${PYTHONPATH:-""}:$FLATBUFFERS_PATH
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5a70316 and ce2097b.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (30)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
  • applications/pipeline_visualization/cpp/metadata.json
  • applications/pipeline_visualization/schemas/message.fbs
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • .vscode/launch.json
  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/visualizer/styles.py
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml
  • applications/pipeline_visualization/schemas/tensor.fbs
  • applications/pipeline_visualization/CMakeLists.txt
  • applications/CMakeLists.txt
🧰 Additional context used
📓 Path-based instructions (1)
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/metadata.json: In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category
The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Files:

  • applications/pipeline_visualization/python/metadata.json
🧠 Learnings (10)
📓 Common learnings
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.117Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.
📚 Learning: 2025-11-11T15:40:28.117Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.117Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.

Applied to files:

  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/cpp/create_tensor.hpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/metadata.json : Each application must include a metadata.json that follows the application schema

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/README.md
  • applications/pipeline_visualization/visualizer/visualizer_static.py
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to applications/CMakeLists.txt : Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/operators/*/*.{py,cpp} : Main operator filename should match the directory name with the appropriate extension

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to workflow/CMakeLists.txt : Register workflows in workflow/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/operators/**/*.{py,cpp,hpp} : Operator class names should be TitleCase with an 'Op' suffix (e.g., AdaptiveThresholdingOp)

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to operators/CMakeLists.txt : Register operators in operators/CMakeLists.txt using add_holohub_operator(...) and optional DEPENDS EXTENSIONS

Applied to files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
🧬 Code graph analysis (13)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (2)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
operators/medical_imaging/core/io_type.py (1)
  • IOType (19-25)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (3)
  • tensor (89-93)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (55-57)
applications/pipeline_visualization/visualizer/nats_async.py (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (229-243)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (219-233)
applications/pipeline_visualization/python/pipeline_visualization.py (4)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (9)
  • spec (64-64)
  • spec (64-64)
  • spec (134-137)
  • spec (134-134)
  • spec (198-198)
  • spec (198-198)
  • TimeSeriesApp (230-235)
  • main (278-374)
  • main (278-278)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)
  • setup (288-304)
  • setup (288-288)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (229-243)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (219-233)
applications/pipeline_visualization/visualizer/graph_components.py (2)
operators/medical_imaging/core/io_type.py (1)
  • IOType (19-25)
applications/pipeline_visualization/cpp/nats_logger.hpp (2)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (9)
  • data (72-76)
  • data (123-127)
  • spec (54-54)
  • tensor (89-93)
  • tensor_map (106-110)
  • AsyncNatsBackend (145-145)
  • AsyncNatsBackend (146-146)
  • entry (168-168)
  • entry (178-178)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
  • CreateTensor (25-111)
  • CreateTensor (25-27)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (312-320)
  • initialize (312-312)
  • initialize (489-491)
  • initialize (489-489)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/start_nats_server.sh (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (229-243)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (219-233)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-188)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_static.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-69)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-188)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (4)
  • Visualizer (34-243)
  • update_subject (102-124)
  • update_source (133-227)
  • run (229-243)
🪛 Clang (14.0.6)
applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'flatbuffers/tensor_generated.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

🪛 markdownlint-cli2 (0.18.1)
applications/pipeline_visualization/README.md

128-128: Bare URL used

(MD034, no-bare-urls)


158-158: Bare URL used

(MD034, no-bare-urls)


195-195: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py

56-56: Avoid specifying long messages outside the exception class

(TRY003)

applications/pipeline_visualization/visualizer/nats_async.py

112-112: Redundant exception object included in logging.exception call

(TRY401)


182-182: Consider moving this statement to an else block

(TRY300)


187-187: Redundant exception object included in logging.exception call

(TRY401)

applications/pipeline_visualization/python/pipeline_visualization.py

54-54: Unused method argument: op_input

(ARG002)


54-54: Unused method argument: context

(ARG002)


89-89: Unused method argument: context

(ARG002)


125-125: Unused method argument: op_output

(ARG002)


125-125: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/graph_components.py

1-1: Shebang is present but file is not executable

(EXE001)

applications/pipeline_visualization/visualizer/visualizer_dynamic.py

240-240: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/visualizer_static.py

230-230: Possible binding to all interfaces

(S104)

🪛 Shellcheck (0.11.0)
applications/pipeline_visualization/start_nats_server.sh

[error] 34-34: Double quote array expansions to avoid re-splitting elements.

(SC2068)

🔇 Additional comments (9)
.vscode/tasks.json (1)

518-537: Build task structure is consistent and well-integrated.

The new pipeline_visualization build task follows the established pattern used throughout the file. The --language python argument is appropriate given the PR adds Python example components alongside C++. The task is correctly positioned alphabetically and will properly integrate with the launch configurations (which reference it via preLaunchTask).

applications/pipeline_visualization/README.md (1)

1-326: Excellent documentation quality!

The README is comprehensive, well-structured, and provides clear instructions for users. The architecture diagram, step-by-step usage guide, troubleshooting section, and advanced usage examples make this a high-quality documentation resource.

applications/pipeline_visualization/python/metadata.json (1)

1-38: LGTM! Metadata follows guidelines.

The metadata.json correctly follows the application schema with "Visualization" as the first tag (category), which is an approved category per coding guidelines. All required fields are present and properly structured.

As per coding guidelines.

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

25-69: Well-implemented tensor conversion function.

The function correctly maps DLDataType components to NumPy dtypes, handles both scalar and vector (multi-lane) types, and provides clear error messages for unsupported types. The documentation is thorough and the logic is sound.

applications/pipeline_visualization/visualizer/nats_async.py (1)

73-188: Solid asynchronous NATS wrapper implementation.

The NatsAsync class provides a clean thread-safe API for NATS messaging with proper error handling, support for both JetStream and standard modes, and graceful shutdown. Previous critical issues have been addressed.

applications/pipeline_visualization/python/pipeline_visualization.yaml (1)

1-26: LGTM! Clear and well-documented configuration.

The YAML configuration provides sensible defaults for the NATS logger with helpful commented examples for pattern-based filtering. The structure is clean and easy to understand.

applications/pipeline_visualization/visualizer/start_visualizer.sh (1)

1-24: Good shell script structure.

The script has proper error handling with set -eu, uses parameter expansion for default arguments, and follows shell scripting best practices. The PYTHONPATH setup correctly exposes the generated FlatBuffers definitions.

applications/pipeline_visualization/python/pydoc.hpp (1)

18-124: Comprehensive and well-structured documentation.

The pydoc provides thorough documentation for NatsLogger, covering all parameters, behaviors, filtering logic, and queue configurations. The notes section effectively clarifies unique_id format and pattern matching logic for both distributed and non-distributed applications.

Note: The static analysis error for macros.hpp is a false positive - the header is available in the build context and properly referenced by the CMake configuration.

applications/pipeline_visualization/cpp/create_tensor.hpp (1)

18-61: Well-documented tensor serialization header.

The header provides comprehensive documentation for the CreateTensor function, covering supported device types, parameters, return values, and potential exceptions. The function signature appropriately handles FlatBuffers serialization with optional CUDA stream support.

Note: The static analysis error for flatbuffers/tensor_generated.h is a false positive. Per learnings, the include path is correct because the CMake target pipeline_visualization_flatbuffers_schemas generates the headers and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by linking targets.

Based on learnings.

Copy link
Contributor

@greptile-apps greptile-apps bot left a comment

28 files reviewed, no comments

Copy link

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
applications/pipeline_visualization/cpp/nats_logger.cpp (1)

300-301: Fix non-routable default NATS URL.

The default URL "nats://0.0.0.0:4222" is invalid for client connections. 0.0.0.0 is a non-routable meta-address used for server binding, not client connections. Your own documentation on line 121 shows the correct format.

Apply this diff:

-  spec.param(
-      nats_url_, "nats_url", "NATS URL", "NATS server URL", std::string("nats://0.0.0.0:4222"));
+  spec.param(
+      nats_url_, "nats_url", "NATS URL", "NATS server URL", std::string("nats://localhost:4222"));
🧹 Nitpick comments (8)
applications/pipeline_visualization/schemas/tensor.fbs (1)

20-24: Consider adding remaining DLPack types for completeness.

The current DLDataTypeCode enum includes the essential types for this demo (int, uint, float). However, the DLPack standard includes additional types that may be useful for future extensions:

  • kDLBfloat = 4 (for ML bfloat16)
  • kDLComplex = 5 (for complex numbers)
  • kDLOpaqueHandle = 3 (for opaque types)

See operators/grpc_operators/python/common/tensor_proto.py lines 45-50 for reference.

Apply this diff if you want to align with the full DLPack standard:

 enum DLDataTypeCode : uint8 {
   kDLInt = 0,
   kDLUInt = 1,
   kDLFloat = 2,
+  kDLOpaqueHandle = 3,
+  kDLBfloat = 4,
+  kDLComplex = 5,
 }
applications/pipeline_visualization/visualizer/nats_async.py (2)

78-80: Fix type hints for connection attributes.

The type hints for _connection and _js_context are incorrect. They should indicate optional types since they start as None and are assigned later.

Apply this diff:

-        self._connection: None = None  # NATS connection
+        self._connection: nats.Connection | None = None  # NATS connection
         self._use_js: bool = False
-        self._js_context: nats.js.JetStreamContext = None
+        self._js_context: nats.js.JetStreamContext | None = None

146-148: Consider handling queue overflow in message handler.

The _async_sub_handler puts messages into the queue with block=False, which will raise queue.Full if the queue is at capacity. This exception would propagate to the NATS client's callback handler, potentially disrupting the subscription.

Apply this diff to handle queue overflow gracefully:

     async def _async_sub_handler(self, msg: nats.aio.msg.Msg):
         if msg.subject in self._subscriptions:
-            self._subscriptions[msg.subject]._queue.put(msg.data, block=False)
+            try:
+                self._subscriptions[msg.subject]._queue.put(msg.data, block=False)
+            except queue.Full:
+                logger.warning(f"Message queue full for subject {msg.subject}, dropping message")
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)

237-240: Binding to all interfaces is acceptable for local development.

The NATS and Dash servers bind to 0.0.0.0, which accepts connections from any network interface. This is appropriate for local development and demo scenarios but should be restricted in production environments by using firewall rules or binding to 127.0.0.1 instead.

applications/pipeline_visualization/visualizer/graph_components.py (1)

1-1: Remove unnecessary shebang from module file.

This file is imported as a module, not executed directly, so the shebang line is unnecessary. Consider removing it to avoid confusion.

applications/pipeline_visualization/visualizer/visualizer_static.py (1)

227-230: Binding to all interfaces is acceptable for local development.

The NATS and Dash servers bind to 0.0.0.0, which accepts connections from any network interface. This is appropriate for local development and demo scenarios but should be restricted in production environments by using firewall rules or binding to 127.0.0.1 instead.

applications/pipeline_visualization/python/pipeline_visualization.py (1)

75-75: Consider consistent tensor dictionary keys throughout the pipeline.

The pipeline uses different dictionary keys at each stage:

  • SourceOp emits dict(wave=...)
  • ModulateOp receives "wave" and emits dict(modulated_signal=...)
  • SinkOp receives any key via next(iter(...))

While functionally correct, using a consistent key (e.g., "data" or "signal") throughout would improve clarity and maintainability.

Also applies to: 111-111, 128-128

applications/pipeline_visualization/cpp/nats_logger.cpp (1)

127-131: Consider simplifying initialization pattern.

The immediately-invoked lambda pattern for initializing unique_ptr members is correct but verbose. Consider direct initialization for clarity:

Example for nats_options:

natsOptions* raw_options = nullptr;
NATS_CALL(natsOptions_Create(&raw_options));
nats_options.reset(raw_options);

However, the current pattern provides clearer exception safety guarantees and is acceptable.

Also applies to: 144-148, 194-199

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ce2097b and b9c7d7f.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (30)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh
  • applications/pipeline_visualization/cpp/metadata.json
  • .vscode/launch.json
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml
  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/requirements.txt
  • applications/pipeline_visualization/visualizer/styles.py
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/pipeline_visualization/python/pipeline_visualization.yaml
  • applications/pipeline_visualization/start_nats_server.sh
  • .vscode/tasks.json
🧰 Additional context used
📓 Path-based instructions (2)
applications/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Files:

  • applications/CMakeLists.txt
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/metadata.json: In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category
The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Files:

  • applications/pipeline_visualization/python/metadata.json
🧠 Learnings (18)
📓 Common learnings
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.117Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to applications/CMakeLists.txt : Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Each application must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to workflow/CMakeLists.txt : Register workflows in workflow/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Applications should include a testing section in CMakeLists.txt using CTest for functional tests

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to pkg/CMakeLists.txt : Register packages in pkg/CMakeLists.txt using add_holohub_package(...)

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to gxf_extensions/CMakeLists.txt : Register extensions in gxf_extensions/CMakeLists.txt using add_holohub_extension(...)

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to operators/CMakeLists.txt : Register operators in operators/CMakeLists.txt using add_holohub_operator(...) and optional DEPENDS EXTENSIONS

Applied to files:

  • applications/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/pkg/*/CMakeLists.txt : Each package must provide a CMakeLists.txt for package configuration

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/workflows/*/CMakeLists.txt : Each workflow must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/operators/*/*.{py,cpp} : Main operator filename should match the directory name with the appropriate extension

Applied to files:

  • applications/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-11-11T15:40:28.117Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.117Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.

Applied to files:

  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
  • applications/pipeline_visualization/schemas/message.fbs
  • applications/pipeline_visualization/schemas/tensor.fbs
  • applications/pipeline_visualization/cpp/create_tensor.hpp
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/README.md
  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/python/pipeline_visualization.py
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/metadata.json : Each application must include a metadata.json that follows the application schema

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : In metadata.json, within one of {application, operator, tutorial, benchmark, workflow, gxf_extension}, ensure there is a tags array and treat the first element as the category

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-30T19:04:41.239Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1203
File: applications/video_streaming/video_streaming_server/cpp/metadata.json:42-51
Timestamp: 2025-10-30T19:04:41.239Z
Learning: In the video_streaming application structure, umbrella metadata (applications/video_streaming/metadata.json) uses language-qualified mode names like "server_cpp" and "server_python" to distinguish between implementations, while component-specific metadata files (e.g., applications/video_streaming/video_streaming_server/cpp/metadata.json) use simple mode names like "server" since they are already scoped to a specific language implementation by their directory structure.

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
📚 Learning: 2025-10-20T17:54:32.001Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-20T17:54:32.001Z
Learning: Applies to **/metadata.json : The category (first tag) in metadata.json must be one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
🧬 Code graph analysis (11)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-71)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-189)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/nats_async.py (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (229-243)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (219-233)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (312-320)
  • initialize (312-312)
  • initialize (489-491)
  • initialize (489-489)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/visualizer/visualizer_static.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-71)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_graph (28-133)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-189)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (2)
  • update_source (133-227)
  • run (229-243)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (9)
  • data (72-76)
  • data (123-127)
  • spec (54-54)
  • tensor (89-93)
  • tensor_map (106-110)
  • AsyncNatsBackend (145-145)
  • AsyncNatsBackend (146-146)
  • entry (168-168)
  • entry (178-178)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
  • CreateTensor (25-111)
  • CreateTensor (25-27)
applications/pipeline_visualization/python/pipeline_visualization.py (5)
applications/pipeline_visualization/cpp/nats_logger.hpp (2)
  • NatsLogger (47-47)
  • spec (54-54)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (15)
  • spec (64-64)
  • spec (64-64)
  • spec (134-137)
  • spec (134-134)
  • spec (198-198)
  • spec (198-198)
  • op_input (139-181)
  • op_input (139-140)
  • op_input (200-206)
  • op_input (200-201)
  • wave (73-73)
  • ModulateOp (122-122)
  • TimeSeriesApp (230-235)
  • main (278-374)
  • main (278-278)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)
  • setup (288-304)
  • setup (288-288)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (229-243)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (219-233)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
applications/pipeline_visualization/visualizer/graph_components.py (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (2)
  • data (72-76)
  • data (123-127)
🪛 Clang (14.0.6)
applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'flatbuffers/tensor_generated.h' file not found

(clang-diagnostic-error)

🪛 Ruff (0.14.4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py

56-56: Avoid specifying long messages outside the exception class

(TRY003)


62-62: Avoid specifying long messages outside the exception class

(TRY003)

applications/pipeline_visualization/visualizer/visualizer_dynamic.py

240-240: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/visualizer_static.py

230-230: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/python/pipeline_visualization.py

54-54: Unused method argument: op_input

(ARG002)


54-54: Unused method argument: context

(ARG002)


89-89: Unused method argument: context

(ARG002)


125-125: Unused method argument: op_output

(ARG002)


125-125: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/graph_components.py

1-1: Shebang is present but file is not executable

(EXE001)

🔇 Additional comments (19)
applications/CMakeLists.txt (1)

95-95: LGTM!

The new application entry is correctly positioned in alphabetical order and follows the expected pattern for application registration.

applications/pipeline_visualization/README.md (3)

1-106: LGTM!

The overview, architecture diagram, and prerequisites are well-documented and provide clear guidance for users. The Mermaid diagram effectively illustrates the data flow between components.


107-214: LGTM!

The usage instructions are clear and well-structured, with good examples of command-line options and configuration. The data format documentation helps users understand the message structure.


215-332: LGTM!

The troubleshooting, customization, and advanced usage sections provide valuable guidance for users. The performance considerations are appropriate and the customization examples are practical.

applications/pipeline_visualization/python/metadata.json (1)

1-38: LGTM!

The metadata structure is complete and follows the required schema. The category tag "Visualization" is from the approved list, and all required fields are properly specified.

applications/pipeline_visualization/schemas/tensor.fbs (1)

57-64: LGTM!

The Tensor table definition is well-structured and follows DLPack conventions. The use of native_inline for DLDataType and DLDevice is appropriate for performance.

applications/pipeline_visualization/python/pydoc.hpp (2)

27-95: LGTM!

The documentation is comprehensive and well-structured, covering all parameters with clear descriptions and appropriate defaults. The parameter documentation follows Python docstring conventions effectively.


97-119: LGTM!

The Notes section provides valuable context about unique_id formatting and pattern matching logic. The examples for distributed vs. non-distributed applications are particularly helpful.

applications/pipeline_visualization/visualizer/nats_async.py (1)

102-142: LGTM!

The async event loop and control queue pattern is well-implemented. The error handling for connection failures and the early return on connection errors prevent downstream issues.

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)

25-71: LGTM!

The tensor-to-numpy conversion logic is correct and handles the essential DLPack types. The dtype mapping, endianness handling, and multi-lane support are all properly implemented. The ValueError messages are descriptive and helpful for debugging.

applications/pipeline_visualization/schemas/message.fbs (1)

18-37: LGTM!

The message schema is well-designed with appropriate metadata fields. The use of a union for Payload provides extensibility for future payload types beyond tensors. The timestamp fields and unique_id provide sufficient tracking information for visualization.

applications/pipeline_visualization/python/nats_logger_pybind.cpp (1)

58-58: Verify the correct default value for large_data_worker_sleep_time.

Both the PyNatsLogger constructor and the pybind11 binding now consistently use 50000 for large_data_worker_sleep_time. However, a previous review comment suggested this should be 200000 to match the C++ constructor. Please verify the correct default value in the NatsLogger C++ implementation and ensure consistency across the bindings.

Also applies to: 147-147

applications/pipeline_visualization/cpp/nats_logger.cpp (7)

48-60: LGTM: Centralized error handling via macro.

The NATS_CALL macro provides clean, consistent error handling with detailed diagnostics.
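
For readers unfamiliar with the pattern, a minimal sketch of such a wrapper macro is shown below. It is illustrative only (the actual definition lives in nats_logger.cpp and may differ); it assumes the NATS C client header and its natsStatus_GetText helper.

#include <stdexcept>
#include <string>

#include <nats/nats.h>

// Hypothetical sketch: wrap a NATS C API call and convert a non-OK status into
// an exception carrying the call site and the NATS error text.
#define NATS_CALL(call)                                                        \
  do {                                                                         \
    const natsStatus _status = (call);                                         \
    if (_status != NATS_OK) {                                                  \
      throw std::runtime_error(std::string(#call) + " failed at " + __FILE__ + \
                               ":" + std::to_string(__LINE__) + ": " +         \
                               natsStatus_GetText(_status));                   \
    }                                                                          \
  } while (0)

// Usage: NATS_CALL(natsOptions_Create(&raw_options));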


72-89: LGTM: Clean RAII pattern for NATS resources.

The custom deleter template and unique_ptr aliases ensure automatic cleanup of NATS objects.
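
As a rough illustration of this pattern (a sketch under assumed names, not the code in this PR), a deleter template bound to the NATS C destroy functions can back unique_ptr aliases like so:

#include <memory>

#include <nats/nats.h>

// Hypothetical sketch: bind each NATS destroy function to a deleter so the
// handle is released automatically when the unique_ptr goes out of scope.
template <auto DestroyFn>
struct NatsDeleter {
  template <typename T>
  void operator()(T* ptr) const noexcept {
    if (ptr != nullptr) { DestroyFn(ptr); }
  }
};

using NatsOptionsPtr = std::unique_ptr<natsOptions, NatsDeleter<&natsOptions_Destroy>>;
using NatsConnectionPtr = std::unique_ptr<natsConnection, NatsDeleter<&natsConnection_Destroy>>;
using NatsSubscriptionPtr = std::unique_ptr<natsSubscription, NatsDeleter<&natsSubscription_Destroy>>;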


234-267: LGTM: Rate limiting logic is sound.

The function correctly handles edge cases including non-positive publish rates (treated as "no throttling") and first-time messages for each unique_id.
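
The general shape of such per-unique_id throttling, sketched here with illustrative names rather than the actual members of NatsLogger, is roughly:

#include <chrono>
#include <string>
#include <unordered_map>

// Hypothetical sketch: allow a message when throttling is disabled (rate <= 0),
// when the unique_id has never published before, or when enough time has
// elapsed since its last accepted message.
bool should_publish(
    const std::string& unique_id, double max_rate_hz,
    std::unordered_map<std::string, std::chrono::steady_clock::time_point>& last_published) {
  if (max_rate_hz <= 0.0) { return true; }  // non-positive rate: no throttling
  const auto now = std::chrono::steady_clock::now();
  const auto it = last_published.find(unique_id);
  if (it == last_published.end()) {  // first message for this unique_id
    last_published.emplace(unique_id, now);
    return true;
  }
  const std::chrono::duration<double> elapsed = now - it->second;
  if (elapsed.count() >= 1.0 / max_rate_hz) {
    it->second = now;
    return true;
  }
  return false;  // drop: publishing faster than the configured rate
}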


322-361: LGTM: CUDA stream is now preserved.

All three override methods correctly forward the stream parameter to the base AsyncDataLoggerResource methods, ensuring GPU synchronization uses the producer's CUDA stream.


363-484: LGTM: GXF Entity handling is comprehensive.

The function properly handles both nvidia::gxf::Entity and holoscan::gxf::Entity, extracting and logging Tensor and VideoBuffer components with appropriate error handling.


527-552: LGTM: Publish failures are now propagated.

The TensorMapData branch now checks the return value of publish_data and returns false on failure, ensuring consistent error handling across all data types.


162-177: LGTM: Null reply subject is handled safely.

The function correctly guards against nullptr from natsMsg_GetReply by checking the pointer before constructing the string.
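
In other words, the guard amounts to something like the following sketch (illustrative, not the exact code under review); it only assumes that natsMsg_GetReply from the NATS C client may return nullptr:

#include <string>

#include <nats/nats.h>

// Hypothetical sketch: natsMsg_GetReply() returns nullptr when no reply subject
// is set, so check the pointer before constructing a std::string from it.
std::string reply_subject_or_empty(natsMsg* msg) {
  const char* reply = natsMsg_GetReply(msg);
  return (reply != nullptr) ? std::string(reply) : std::string();
}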

Copy link
Contributor

@greptile-apps greptile-apps bot left a comment

28 files reviewed, no comments

Copy link

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
applications/pipeline_visualization/python/CMakeLists.txt (1)

23-25: Remove redundant linking handled by macro.

The pybind11_add_holohub_module macro already links the Python module to the nats_logger target internally. This explicit target_link_libraries call is redundant.

Based on learnings

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b9c7d7f and 7939339.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (30)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml
🚧 Files skipped from review as they are similar to previous changes (9)
  • .vscode/tasks.json
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml
  • applications/pipeline_visualization/start_nats_server.sh
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • applications/CMakeLists.txt
  • applications/pipeline_visualization/cpp/metadata.json
  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/visualizer/start_visualizer.sh
  • applications/pipeline_visualization/schemas/message.fbs
🧰 Additional context used
🧠 Learnings (12)
📓 Common learnings
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/pkg/*/CMakeLists.txt : Each package must provide a CMakeLists.txt for package configuration

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to applications/CMakeLists.txt : Register applications in applications/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to pkg/CMakeLists.txt : Register packages in pkg/CMakeLists.txt using add_holohub_package(...)

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/README.md
  • applications/pipeline_visualization/python/pipeline_visualization.py
📚 Learning: 2025-11-11T15:40:28.117Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.117Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/schemas/tensor.fbs
  • applications/pipeline_visualization/cpp/create_tensor.hpp
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Each application must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/applications/*/CMakeLists.txt : Applications should include a testing section in CMakeLists.txt using CTest for functional tests

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/workflows/*/CMakeLists.txt : Each workflow must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to holohub/gxf_extensions/*/CMakeLists.txt : Each GXF extension must provide a CMakeLists.txt for build system integration

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
📚 Learning: 2025-10-22T16:16:16.226Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-10-22T16:16:16.226Z
Learning: Applies to workflow/CMakeLists.txt : Register workflows in workflow/CMakeLists.txt using add_holohub_application(...) and optional DEPENDS OPERATORS

Applied to files:

  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
🧬 Code graph analysis (11)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-71)
applications/pipeline_visualization/visualizer/graph_components.py (2)
  • create_app_layout (143-215)
  • create_graph (35-140)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-192)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/visualizer_static.py (4)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-71)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_app_layout (143-215)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-192)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (4)
  • Visualizer (33-198)
  • update_subject (57-79)
  • update_source (88-182)
  • run (184-198)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (3)
  • tensor (89-93)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (55-57)
applications/pipeline_visualization/visualizer/nats_async.py (2)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (184-198)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (171-185)
applications/pipeline_visualization/cpp/nats_logger.cpp (3)
applications/pipeline_visualization/cpp/nats_logger.hpp (9)
  • data (72-76)
  • data (123-127)
  • spec (54-54)
  • tensor (89-93)
  • tensor_map (106-110)
  • AsyncNatsBackend (145-145)
  • AsyncNatsBackend (146-146)
  • entry (168-168)
  • entry (178-178)
applications/pipeline_visualization/cpp/create_tensor.cpp (2)
  • CreateTensor (25-111)
  • CreateTensor (25-27)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (55-57)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (310-318)
  • initialize (310-310)
  • initialize (487-489)
  • initialize (487-487)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
applications/pipeline_visualization/python/pipeline_visualization.py (4)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (13)
  • spec (64-64)
  • spec (64-64)
  • spec (134-137)
  • spec (134-134)
  • spec (198-198)
  • spec (198-198)
  • op_input (139-181)
  • op_input (139-140)
  • op_input (200-206)
  • op_input (200-201)
  • TimeSeriesApp (230-235)
  • main (282-377)
  • main (282-282)
applications/pipeline_visualization/cpp/nats_logger.cpp (4)
  • setup (286-302)
  • setup (286-286)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1)
  • run (184-198)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)
  • run (171-185)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
🪛 Clang (14.0.6)
applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'flatbuffers/tensor_generated.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

🪛 LanguageTool
applications/pipeline_visualization/README.md

[duplication] ~11-~11: Possible typo: you repeated a word.
Context: ...s data (sine waves) and publishes it to NATS 2. NATS Server: A message broker that handles...

(ENGLISH_WORD_REPEAT_RULE)


[uncategorized] ~228-~228: Use a comma before ‘but’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...ot Updating Problem: Web page loads but graphs don't update Solution: 1. C...

(COMMA_COMPOUND_SENTENCE_2)

🪛 Ruff (0.14.4)
applications/pipeline_visualization/visualizer/visualizer_dynamic.py

195-195: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/visualizer/visualizer_static.py

182-182: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/python/pipeline_visualization.py

55-55: Unused method argument: op_input

(ARG002)


55-55: Unused method argument: context

(ARG002)


90-90: Unused method argument: context

(ARG002)


126-126: Unused method argument: op_output

(ARG002)


126-126: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/tensor_to_numpy.py

56-56: Avoid specifying long messages outside the exception class

(TRY003)


62-62: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (2)
applications/pipeline_visualization/requirements.txt (1)

1-19: LGTM!

The requirements file is well-structured with clear comments for each dependency. The version constraints are appropriate for the pipeline visualization example: NATS messaging (nats-py), web UI framework and plotting (dash, plotly), numerical computing and data manipulation (numpy, pandas), and FlatBuffers for schema serialization.

The numpy constraint was previously discussed in an earlier review round; the clarification that pandas 2.3.3+ supports numpy 2.0 resolves any concerns.

applications/pipeline_visualization/visualizer/graph_components.py (1)

62-67: Verify IOType enum comparison pattern.

The comparison pipeline_visualization.flatbuffers.IOType.IOType().kOutput instantiates an IOType object to access the enum value. Confirm this matches the FlatBuffers Python API for the generated code.

Typical FlatBuffers enum access would be IOType.kOutput directly. If instantiation is required, consider assigning it once rather than per message.
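As a rough sketch of the "assign it once" suggestion (the module path follows the expression quoted above; the accessor name `IoType()` and the specific enum member are illustrative assumptions, not verified against the generated code):

```python
from pipeline_visualization.flatbuffers.IOType import IOType

# flatc-generated Python enums are plain classes with integer class attributes,
# so the member can usually be read once at import time instead of per message.
OUTPUT_IO_TYPE = IOType.kOutput

def is_output(message):
    # message.IoType() stands in for whatever generated accessor returns the enum value
    return message.IoType() == OUTPUT_IO_TYPE
```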

@AndreasHeumann AndreasHeumann force-pushed the ah/pipeline_vis branch 2 times, most recently from 74fd5a6 to 4e54116 Compare November 12, 2025 13:52
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)

179-190: Confirm exposure of Dash UI on all interfaces is acceptable.

Running the Dash server on host="0.0.0.0" exposes the visualizer on all network interfaces, which is useful for remote access but can be risky on shared or unsecured networks. If this is intended only for local development, consider binding to 127.0.0.1 by default or making the host/port configurable via environment variables or CLI flags, and documenting that in the README.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cfbb776 and fd0f1f7.

📒 Files selected for processing (2)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}: All code must adhere to Holoscan SDK coding standards including proper error handling and validation
Use descriptive English naming for functions, variables, and components; minimize acronyms and brand names
Include inline comments for complex logic in code

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python code must pass linting checks via ./holohub lint

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp,cmake}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

All code must compile and build successfully on target platforms before submission

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
{operators,applications,workflows,gxf_extensions,tutorials}/**/README.md

📄 CodeRabbit inference engine (CONTRIBUTING.md)

README.md must include purpose, usage instructions, requirements, examples, and architecture overview

Files:

  • applications/pipeline_visualization/README.md
**/*.md

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Use Holoscan SDK glossary terms when referring to HoloHub-specific locations in documentation

Files:

  • applications/pipeline_visualization/README.md
🧠 Learnings (4)
📓 Common learnings
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.150Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/README.md
📚 Learning: 2025-11-24T16:28:06.280Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.280Z
Learning: Applies to {operators,applications,workflows,gxf_extensions,tutorials}/**/README.md : README.md must include purpose, usage instructions, requirements, examples, and architecture overview

Applied to files:

  • applications/pipeline_visualization/README.md
📚 Learning: 2025-11-17T21:37:54.245Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/README.md:135-137
Timestamp: 2025-11-17T21:37:54.245Z
Learning: In the Holohub repository, when both Python and C++ applications are available for the same application, the Holohub CLI defaults to Python if the `--language` argument is not provided. Users can explicitly specify `--language cpp` or `--language python` to select the desired implementation.

Applied to files:

  • applications/pipeline_visualization/README.md
🪛 LanguageTool
applications/pipeline_visualization/README.md

[style] ~189-~189: This phrase is redundant. Consider using “outside”.
Context: ... container. Vince the visualizer is run outside of the Holohub container, its dependencies...

(OUTSIDE_OF)

🪛 Ruff (0.14.6)
applications/pipeline_visualization/visualizer/visualizer_static.py

190-190: Possible binding to all interfaces

(S104)

🔇 Additional comments (2)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)

33-177: Solid, resilient message-processing loop.

The Visualizer class and update_source callback look well-structured: you safely handle missing NATS client (PreventUpdate), unknown payload types, unknown unique_ids, and tensor_to_numpy conversion failures without crashing the Dash app. The use of Patch for partial updates is also appropriate for high-frequency updates.
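For readers unfamiliar with Dash's Patch, a minimal sketch of the partial-update pattern mentioned here (the component ids, interval, and sample values are illustrative, not the application's actual layout or callback):

```python
from dash import Dash, Patch, dcc, html, Input, Output

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="graph", figure={"data": [{"x": [], "y": []}]}),
    dcc.Interval(id="poll", interval=500),
])

@app.callback(Output("graph", "figure"), Input("poll", "n_intervals"))
def append_sample(n_intervals):
    # Patch() describes a delta against the current figure instead of
    # rebuilding and re-sending the whole figure on every tick.
    patched = Patch()
    patched["data"][0]["x"].append(n_intervals or 0)
    patched["data"][0]["y"].append((n_intervals or 0) % 10)
    return patched
```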

applications/pipeline_visualization/README.md (1)

1-110: README structure and coverage look good.

This README clearly explains purpose, architecture, quick-start steps, CLI options, and requirements, so it satisfies the Holohub guidelines for application READMEs (purpose, usage, requirements, examples, and architecture overview). The note about Holohub CLI defaulting to Python when both languages are available is also consistent with documented behavior.

Based on learnings, this aligns with the documented Holohub CLI language selection behavior.

@greptile-apps greptile-apps bot left a comment

29 files reviewed, 1 comment

@greptile-apps greptile-apps bot left a comment

29 files reviewed, no comments

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
applications/pipeline_visualization/visualizer/visualizer_static.py (2)

55-82: Guard against uninitialized NATS client and None subject in update_subject.

Two small robustness gaps here:

  • If Visualizer is embedded and run() is never called, self._nats_inst remains None, so subscribe/unsubscribe will raise AttributeError the first time the connect button is clicked.
  • set_props("connect", {"disabled": len(subject) == 0}) will raise TypeError if Dash ever passes subject=None (e.g., initial render or programmatic reset), since len(None) is invalid.

To make the callback safer without changing behavior in the normal path, you can lightly guard both:

         @self._app.callback(
             Input("subject", "value"),
             Input("connect", "n_clicks"),
         )
         def update_subject(subject, n_clicks):
@@
-            if ctx.triggered_id == "connect":
-                if n_clicks % 2 == 1:
-                    # Connect: subscribe to the NATS subject with ".data" suffix
-                    self._nats_inst.subscribe(subject + ".data")
-                    self._subject = subject
-                    set_props("subject", {"disabled": True})
-                    set_props("connect", {"children": "Disconnect"})
-                else:
-                    # Disconnect: unsubscribe from the NATS subject
-                    self._nats_inst.unsubscribe(subject + ".data")
-                    self._subject = None
-                    set_props("subject", {"disabled": False})
-                    set_props("connect", {"children": "Connect"})
-            # Disable connect button if subject field is empty
-            set_props("connect", {"disabled": len(subject) == 0})
+            if ctx.triggered_id == "connect":
+                if self._nats_inst is None:
+                    logger.warning("Connect clicked before NATS client was initialized; ignoring.")
+                elif not subject:
+                    logger.warning("Connect clicked with empty subject; ignoring.")
+                elif n_clicks % 2 == 1:
+                    # Connect: subscribe to the NATS subject with ".data" suffix
+                    self._nats_inst.subscribe(subject + ".data")
+                    self._subject = subject
+                    set_props("subject", {"disabled": True})
+                    set_props("connect", {"children": "Disconnect"})
+                else:
+                    # Disconnect: unsubscribe from the NATS subject
+                    self._nats_inst.unsubscribe(subject + ".data")
+                    self._subject = None
+                    set_props("subject", {"disabled": False})
+                    set_props("connect", {"children": "Connect"})
+            # Disable connect button if subject field is empty or None
+            set_props("connect", {"disabled": not subject})

This keeps the happy path unchanged while avoiding crashes in less typical usage patterns.


179-190: Consider not hard-coding 0.0.0.0 for the Dash server host (lint S104).

Binding the web UI to all interfaces is convenient for Docker/cluster setups, but it’s also what Ruff flags as S104. If this example may ever run outside a controlled/network-isolated environment, it’s safer to bind to 127.0.0.1 by default or make the host configurable via an environment variable:

-        self._app.run(debug=False, host="0.0.0.0", port=8050)
+        import os
+
+        host = os.getenv("PIPELINE_VISUALIZER_HOST", "127.0.0.1")
+        port = int(os.getenv("PIPELINE_VISUALIZER_PORT", "8050"))
+        self._app.run(debug=False, host=host, port=port)

This keeps the current behavior easy to restore (by setting PIPELINE_VISUALIZER_HOST=0.0.0.0) while satisfying the security lint in more locked-down environments.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fd0f1f7 and c3a0f90.

📒 Files selected for processing (2)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • applications/pipeline_visualization/README.md
🧰 Additional context used
📓 Path-based instructions (3)
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}: All code must adhere to Holoscan SDK coding standards including proper error handling and validation
Use descriptive English naming for functions, variables, and components; minimize acronyms and brand names
Include inline comments for complex logic in code

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python code must pass linting checks via ./holohub lint

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp,cmake}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

All code must compile and build successfully on target platforms before submission

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.150Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.
🧬 Code graph analysis (1)
applications/pipeline_visualization/visualizer/visualizer_static.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-93)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_app_layout (143-215)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-192)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
🪛 Ruff (0.14.6)
applications/pipeline_visualization/visualizer/visualizer_static.py

190-190: Possible binding to all interfaces

(S104)

🔇 Additional comments (1)
applications/pipeline_visualization/visualizer/visualizer_static.py (1)

119-177: Improved robustness in update_source looks solid.

The callback now:

  • Skips unknown payload types with a logged warning instead of raising.
  • Handles unknown unique_ids via the startswith search and a warning, rather than crashing.
  • Wraps tensor_to_numpy in a try/except ValueError and logs the failure while continuing with other messages.

This is a good balance between observability and resilience; malformed or unexpected messages no longer take down the whole update loop.

@greptile-apps greptile-apps bot left a comment

29 files reviewed, no comments

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
applications/pipeline_visualization/README.md (1)

140-149: Fix minor grammar: “applications demonstrate”.

Small nit: line 142 should use plural agreement:

-The C++ and Python applications demonstrates a simple Holoscan pipeline with data logging:
+The C++ and Python applications demonstrate a simple Holoscan pipeline with data logging:
applications/pipeline_visualization/visualizer/visualizer_static.py (2)

140-145: Avoid shadowing built‑in id in the loop.

Minor style/readability nit: the loop variable id shadows Python’s built‑in id() function. Renaming improves clarity:

-                unique_id = message.UniqueId().decode()
-                for i, id in enumerate(self._unique_ids):
+                unique_id = message.UniqueId().decode()
+                for i, uid in enumerate(self._unique_ids):
                     # the python application is using tensor maps where the unique_id also contains
                     # the tensor name, so we need to check if the unique_id starts with the id
-                    if unique_id.startswith(id):
+                    if unique_id.startswith(uid):
                         index = i
                         break

186-190: Consider binding the Dash server to localhost by default.

The server currently listens on 0.0.0.0:8050, which is convenient for remote access but exposes the UI on all interfaces. For tighter defaults, you might consider:

-        self._app.run(debug=False, host="0.0.0.0", port=8050)
+        self._app.run(debug=False, host="127.0.0.1", port=8050)

and document how to change host to 0.0.0.0 when remote access is desired. If you expect this to run only in controlled environments (e.g., inside a container), the current setting may be acceptable.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c3a0f90 and 758b6f2.

📒 Files selected for processing (2)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
{operators,applications,workflows,gxf_extensions,tutorials}/**/README.md

📄 CodeRabbit inference engine (CONTRIBUTING.md)

README.md must include purpose, usage instructions, requirements, examples, and architecture overview

Files:

  • applications/pipeline_visualization/README.md
**/*.md

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Use Holoscan SDK glossary terms when referring to HoloHub-specific locations in documentation

Files:

  • applications/pipeline_visualization/README.md
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}: All code must adhere to Holoscan SDK coding standards including proper error handling and validation
Use descriptive English naming for functions, variables, and components; minimize acronyms and brand names
Include inline comments for complex logic in code

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python code must pass linting checks via ./holohub lint

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp,cmake}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

All code must compile and build successfully on target platforms before submission

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.150Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.
📚 Learning: 2025-11-24T16:28:06.280Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.280Z
Learning: Applies to {operators,applications,workflows,gxf_extensions,tutorials}/**/README.md : README.md must include purpose, usage instructions, requirements, examples, and architecture overview

Applied to files:

  • applications/pipeline_visualization/README.md
📚 Learning: 2025-11-17T21:37:54.245Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/README.md:135-137
Timestamp: 2025-11-17T21:37:54.245Z
Learning: In the Holohub repository, when both Python and C++ applications are available for the same application, the Holohub CLI defaults to Python if the `--language` argument is not provided. Users can explicitly specify `--language cpp` or `--language python` to select the desired implementation.

Applied to files:

  • applications/pipeline_visualization/README.md
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/README.md
🧬 Code graph analysis (1)
applications/pipeline_visualization/visualizer/visualizer_static.py (3)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1)
  • tensor_to_numpy (25-93)
applications/pipeline_visualization/visualizer/graph_components.py (1)
  • create_app_layout (143-215)
applications/pipeline_visualization/visualizer/nats_async.py (5)
  • NatsAsync (73-192)
  • subscribe (90-91)
  • unsubscribe (93-94)
  • get_message (85-88)
  • shutdown (99-100)
🪛 Ruff (0.14.6)
applications/pipeline_visualization/visualizer/visualizer_static.py

190-190: Possible binding to all interfaces

(S104)

🔇 Additional comments (2)
applications/pipeline_visualization/README.md (1)

1-190: README structure and coverage look solid.

This README clearly explains purpose, quick-start usage, prerequisites, Python visualizer dependencies, and includes an architecture diagram plus examples/customization guidance. It satisfies the required “purpose, usage instructions, requirements, examples, and architecture overview” expectations for application READMEs.

As per coding guidelines, this is in good shape.

applications/pipeline_visualization/visualizer/visualizer_static.py (1)

33-177: Static visualizer logic and error handling look robust.

The overall design—predefined _unique_ids, draining the NATS queue each interval, filtering on payload type, safely handling unknown IDs, and wrapping tensor conversion in try/except—gives a resilient callback that won’t crash the Dash app on malformed or unexpected messages. This aligns well with the intended static visualizer behavior.
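To illustrate the drain-and-dispatch shape being described (the message accessors, the get_message-returns-None-when-empty semantics, and the tensor_to_numpy import are assumptions drawn from the surrounding review context, not the application's exact code):

```python
import logging

# Assumed repo helper per the code graph above; converts a FlatBuffers tensor message to numpy.
from tensor_to_numpy import tensor_to_numpy

logger = logging.getLogger(__name__)

def drain_and_dispatch(nats_client, known_ids, handle_array):
    """Drain every queued message once per interval; a malformed message
    only skips itself instead of aborting the whole update."""
    while (message := nats_client.get_message()) is not None:
        unique_id = message.UniqueId().decode()
        if not any(unique_id.startswith(known) for known in known_ids):
            logger.warning("Unknown unique_id %s; skipping message", unique_id)
            continue
        try:
            handle_array(unique_id, tensor_to_numpy(message))
        except ValueError as error:
            logger.warning("Tensor conversion failed for %s: %s", unique_id, error)
```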

AndreasHeumann and others added 10 commits December 1, 2025 11:18
This example demonstrates real-time visualization of data from
Holoscan applications using NATS messaging and web-based dashboards.
It showcases how to stream tensor data from a Holoscan pipeline
and visualize it dynamically in a browser.

Signed-off-by: Andreas Heumann <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Signed-off-by: B Hashemian <[email protected]>
Update the README and mention that Python dependencies need to be
installed before launching the visualizer.

Signed-off-by: Andreas Heumann <[email protected]>
@greptile-apps greptile-apps bot left a comment

29 files reviewed, no comments

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
applications/pipeline_visualization/python/CMakeLists.txt (1)

23-25: Redundant linking already flagged in previous review.

This explicit target_link_libraries call is redundant since pybind11_add_holohub_module already handles linking to the C++ target internally.

Based on learnings.

applications/pipeline_visualization/python/nats_logger_pybind.cpp (1)

58-58: Verify the correct default for large_data_worker_sleep_time.

The previous review flagged a mismatch where the constructor had 200000 and the binding had 50000. The current code shows both at 50000, but it's unclear whether this is the semantically correct default. Please confirm that 50000 microseconds (50ms) is the intended worker sleep time for the large data queue, or if 200000 microseconds (200ms) should be used instead.

🧹 Nitpick comments (3)
applications/pipeline_visualization/Dockerfile (2)

37-54: Clarify the intent of purging python3-pip when python3 is not installed.

Line 45-46 purges python3-pip inside the conditional block that only executes when python3 is NOT installed. This logic seems backwards—if python3 isn't present, python3-pip shouldn't exist or be relevant to remove. Consider restructuring the conditional or clarifying the intent with a comment.


55-58: PIP_BREAK_SYSTEM_PACKAGES=1 bypasses pip's system-package safety checks.

Setting PIP_BREAK_SYSTEM_PACKAGES=1 globally allows pip to install packages into the system Python, which can cause version conflicts and is discouraged in general. However, in a Docker context where virtual environments aren't practical, this is an acceptable trade-off. Ensure that downstream applications tolerate the system-wide package installation.

applications/pipeline_visualization/visualizer/visualizer_static.py (1)

187-190: Consider making network bindings configurable.

The visualizer binds to all interfaces (0.0.0.0), which exposes the dashboard and NATS connection to the network. While this may be intentional for remote visualization, consider making the host configurable via environment variables or command-line arguments to allow deployment flexibility and improved security posture.

Example enhancement:

import os

# In __init__ or run:
nats_host = os.getenv("NATS_HOST", "127.0.0.1:4222")
dash_host = os.getenv("DASH_HOST", "127.0.0.1")
dash_port = int(os.getenv("DASH_PORT", "8050"))

self._nats_inst = NatsAsync(host=nats_host)
self._app.run(debug=False, host=dash_host, port=dash_port)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 758b6f2 and 99a0884.

⛔ Files ignored due to path filters (1)
  • applications/pipeline_visualization/pipeline_visualization.png is excluded by !**/*.png
📒 Files selected for processing (31)
  • .vscode/launch.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • applications/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/Dockerfile (1 hunks)
  • applications/pipeline_visualization/README.md (1 hunks)
  • applications/pipeline_visualization/cpp/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/create_tensor.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/metadata.json (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/nats_logger.hpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp (1 hunks)
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/CMakeLists.txt (1 hunks)
  • applications/pipeline_visualization/python/metadata.json (1 hunks)
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.py (1 hunks)
  • applications/pipeline_visualization/python/pipeline_visualization.yaml (1 hunks)
  • applications/pipeline_visualization/python/pydoc.hpp (1 hunks)
  • applications/pipeline_visualization/requirements.txt (1 hunks)
  • applications/pipeline_visualization/schemas/message.fbs (1 hunks)
  • applications/pipeline_visualization/schemas/tensor.fbs (1 hunks)
  • applications/pipeline_visualization/start_nats_server.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/graph_components.py (1 hunks)
  • applications/pipeline_visualization/visualizer/nats_async.py (1 hunks)
  • applications/pipeline_visualization/visualizer/start_visualizer.sh (1 hunks)
  • applications/pipeline_visualization/visualizer/styles.py (1 hunks)
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py (1 hunks)
  • applications/pipeline_visualization/visualizer/visualizer_static.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (15)
  • .vscode/launch.json
  • applications/pipeline_visualization/requirements.txt
  • applications/CMakeLists.txt
  • applications/pipeline_visualization/start_nats_server.sh
  • applications/pipeline_visualization/schemas/message.fbs
  • applications/pipeline_visualization/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.yaml
  • applications/pipeline_visualization/visualizer/start_visualizer.sh
  • applications/pipeline_visualization/schemas/tensor.fbs
  • applications/pipeline_visualization/python/pipeline_visualization.yaml
  • applications/pipeline_visualization/visualizer/styles.py
  • applications/pipeline_visualization/README.md
  • applications/pipeline_visualization/visualizer/nats_async.py
  • applications/pipeline_visualization/cpp/CMakeLists.txt
  • .vscode/tasks.json
🧰 Additional context used
📓 Path-based instructions (6)
{applications,workflows}/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

CMakeLists.txt must include build system integration using add_holohub_application() for applications and workflows

Files:

  • applications/pipeline_visualization/python/CMakeLists.txt
applications/**/CMakeLists.txt

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Applications should include a testing section in CMakeLists.txt for functional testing using CTest

Files:

  • applications/pipeline_visualization/python/CMakeLists.txt
**/metadata.json

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

When reviewing PRs that modify or add metadata.json files, verify that the category (first tag in the tags array) matches one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

**/metadata.json: Use interface or specific metadata schema for defining contribution structure (metadata.json)
metadata.json must include name, authors, language, version, changelog, holoscan_sdk info, platforms, tags, ranking, dependencies, and run command
metadata.json ranking levels must be self-assessed from 0 (production-ready) to 5 (deprecated) with corresponding requirements
Dependencies must be correctly specified in metadata.json dependencies section

Files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/metadata.json
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp}: All code must adhere to Holoscan SDK coding standards including proper error handling and validation
Use descriptive English naming for functions, variables, and components; minimize acronyms and brand names
Include inline comments for complex logic in code

Files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/pipeline_visualization/python/pydoc.hpp
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp
  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/python/pipeline_visualization.py
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/visualizer/graph_components.py
  • applications/pipeline_visualization/cpp/nats_logger.hpp
  • applications/pipeline_visualization/cpp/nats_logger.cpp
  • applications/pipeline_visualization/cpp/create_tensor.hpp
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py
{operators,applications,workflows,gxf_extensions}/**/*.{py,cpp,hpp,cmake}

📄 CodeRabbit inference engine (CONTRIBUTING.md)

All code must compile and build successfully on target platforms before submission

Files:

  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
  • applications/pipeline_visualization/python/pydoc.hpp
  • applications/pipeline_visualization/python/nats_logger_pybind.cpp
  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/python/pipeline_visualization.py
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/visualizer/graph_components.py
  • applications/pipeline_visualization/cpp/nats_logger.hpp
  • applications/pipeline_visualization/cpp/nats_logger.cpp
  • applications/pipeline_visualization/cpp/create_tensor.hpp
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py
**/*.py

📄 CodeRabbit inference engine (CONTRIBUTING.md)

Python code must pass linting checks via ./holohub lint

Files:

  • applications/pipeline_visualization/visualizer/visualizer_static.py
  • applications/pipeline_visualization/python/pipeline_visualization.py
  • applications/pipeline_visualization/visualizer/graph_components.py
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
  • applications/pipeline_visualization/visualizer/visualizer_dynamic.py
🧠 Learnings (13)
📓 Common learnings
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.150Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.
📚 Learning: 2025-10-22T16:53:45.393Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: operators/video_streaming/streaming_client_enhanced/python/CMakeLists.txt:16-24
Timestamp: 2025-10-22T16:53:45.393Z
Learning: The pybind11_add_holohub_module CMake macro in cmake/pybind11_add_holohub_module.cmake encapsulates all pybind11 setup internally, including finding pybind11, linking against holoscan::pybind11 conditionally, and linking the C++ operator target. Operator Python bindings in holohub should only call this macro without additional pybind11 setup.

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-11-24T16:28:06.280Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.280Z
Learning: Applies to {applications,workflows}/**/CMakeLists.txt : CMakeLists.txt must include build system integration using add_holohub_application() for applications and workflows

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
📚 Learning: 2025-11-24T16:28:06.280Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.280Z
Learning: Applies to operators/**/CMakeLists.txt : CMakeLists.txt must include build system integration using add_holohub_operator() for operators

Applied to files:

  • applications/pipeline_visualization/python/CMakeLists.txt
  • applications/pipeline_visualization/cpp/pipeline_visualization.cpp
📚 Learning: 2025-11-24T16:28:06.281Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.281Z
Learning: Applies to **/metadata.json : metadata.json must include name, authors, language, version, changelog, holoscan_sdk info, platforms, tags, ranking, dependencies, and run command

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-11-24T16:27:43.600Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T16:27:43.600Z
Learning: Applies to **/metadata.json : When reviewing PRs that modify or add `metadata.json` files, verify that the **category** (first tag in the `tags` array) matches one of the approved categories: Benchmarking, Camera, Computer Vision and Perception, Converter, Deployment, Development, Extended Reality, Healthcare AI, Image Processing, Inference, Interoperability, Medical Imaging, Natural Language and Conversational AI, Networking and Distributed Computing, Optimization, Quantum Computing, Rendering, Robotics, Scheduler, Signal Processing, Streaming, Threading, Video, Video Capture, Visualization, XR

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-30T19:04:41.239Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1203
File: applications/video_streaming/video_streaming_server/cpp/metadata.json:42-51
Timestamp: 2025-10-30T19:04:41.239Z
Learning: In the video_streaming application structure, umbrella metadata (applications/video_streaming/metadata.json) uses language-qualified mode names like "server_cpp" and "server_python" to distinguish between implementations, while component-specific metadata files (e.g., applications/video_streaming/video_streaming_server/cpp/metadata.json) use simple mode names like "server" since they are already scoped to a specific language implementation by their directory structure.

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-11-11T15:40:28.150Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.hpp:21-22
Timestamp: 2025-11-11T15:40:28.150Z
Learning: In the pipeline_visualization application, FlatBuffers generated headers like tensor_generated.h can be included using the simplified path `#include <flatbuffers/tensor_generated.h>` because the CMake target pipeline_visualization_flatbuffers_schemas generates them and pipeline_visualization_flatbuffers_target exposes the correct include directories, which are inherited by targets that link against it.

Applied to files:

  • applications/pipeline_visualization/python/metadata.json
  • applications/pipeline_visualization/cpp/create_tensor.cpp
  • applications/pipeline_visualization/cpp/create_tensor.hpp
  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
📚 Learning: 2025-11-10T20:29:02.443Z
Learnt from: bhashemian
Repo: nvidia-holoscan/holohub PR: 1226
File: utilities/cli/util.py:32-32
Timestamp: 2025-11-10T20:29:02.443Z
Learning: In the holohub repository, the `DEFAULT_BASE_SDK_VERSION` constant in `utilities/cli/util.py` is a fallback default value. The `tested_versions` fields in application `metadata.json` files (e.g., in `applications/multiai_ultrasound/*/metadata.json`) are maintained independently and do not need to match the `DEFAULT_BASE_SDK_VERSION`. Each application tracks its own tested versions separately.

Applied to files:

  • applications/pipeline_visualization/Dockerfile
📚 Learning: 2025-11-24T16:28:06.281Z
Learnt from: CR
Repo: nvidia-holoscan/holohub PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2025-11-24T16:28:06.281Z
Learning: Applies to applications/**/ : Applications should follow the directory structure with metadata.json, README.md, implementation files, and CMakeLists.txt

Applied to files:

  • applications/pipeline_visualization/cpp/metadata.json
📚 Learning: 2025-10-22T16:33:55.411Z
Learnt from: cdinea
Repo: nvidia-holoscan/holohub PR: 1170
File: applications/video_streaming/video_streaming_client/python/streaming_client_demo_replayer.yaml:27-36
Timestamp: 2025-10-22T16:33:55.411Z
Learning: In the video_streaming bidirectional client applications (applications/video_streaming/video_streaming_client), the pipeline has two separate data paths: (1) Outgoing: source → format_converter → streaming_client INPUT (sends to server), and (2) Incoming: streaming_client OUTPUT → holoviz (receives from server). The format_converter prepares data for transmission and does NOT feed directly into holoviz visualization.

Applied to files:

  • applications/pipeline_visualization/python/pipeline_visualization.py
📚 Learning: 2025-11-17T06:32:45.908Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/cpp/create_tensor.cpp:64-87
Timestamp: 2025-11-17T06:32:45.908Z
Learning: In CUDA, when `cudaMemcpyAsync` is used with pageable (non-pinned) host memory, the operation is synchronous with respect to the host thread because the runtime must prevent memory modification during transfer. Only pinned (page-locked) memory allows truly asynchronous behavior. Therefore, no explicit `cudaStreamSynchronize` is required after `cudaMemcpyAsync` when the destination is pageable host memory.

Applied to files:

  • applications/pipeline_visualization/cpp/create_tensor.cpp
📚 Learning: 2025-11-18T10:22:08.758Z
Learnt from: AndreasHeumann
Repo: nvidia-holoscan/holohub PR: 1220
File: applications/pipeline_visualization/visualizer/tensor_to_numpy.py:72-91
Timestamp: 2025-11-18T10:22:08.758Z
Learning: In applications/pipeline_visualization/visualizer/tensor_to_numpy.py, tensors always use HWC (Height, Width, Channel) layout, so removing the trailing singleton dimension (when it equals 1) is safe and expected behavior for this specific application.

Applied to files:

  • applications/pipeline_visualization/visualizer/tensor_to_numpy.py
🧬 Code graph analysis (9)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.cpp (6)
  • initialize (312-320)
  • initialize (312-312)
  • initialize (489-491)
  • initialize (489-489)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/cpp/nats_logger.hpp (4)
  • spec (54-54)
  • data (72-76)
  • data (123-127)
  • tensor (89-93)
applications/pipeline_visualization/python/pydoc.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/python/nats_logger_pybind.cpp (2)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • NatsLogger (47-47)
applications/pipeline_visualization/cpp/nats_logger.cpp (2)
  • nats_url (124-149)
  • nats_url (124-124)
applications/pipeline_visualization/python/pipeline_visualization.py (1)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp (14)
  • spec (67-67)
  • spec (67-67)
  • spec (145-148)
  • spec (145-145)
  • spec (217-217)
  • spec (217-217)
  • op_input (150-200)
  • op_input (150-151)
  • op_input (219-225)
  • op_input (219-220)
  • ModulateOp (133-133)
  • TimeSeriesApp (249-254)
  • main (301-402)
  • main (301-301)
applications/pipeline_visualization/cpp/create_tensor.cpp (1)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (58-60)
applications/pipeline_visualization/visualizer/graph_components.py (2)
operators/medical_imaging/core/io_type.py (1)
  • IOType (19-25)
applications/pipeline_visualization/cpp/nats_logger.hpp (2)
  • data (72-76)
  • data (123-127)
applications/pipeline_visualization/cpp/nats_logger.cpp (1)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
  • CreateTensor (58-60)
applications/pipeline_visualization/cpp/create_tensor.hpp (1)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
applications/pipeline_visualization/visualizer/tensor_to_numpy.py (3)
operators/grpc_operators/python/common/tensor_proto.py (1)
  • DLDataTypeCode (46-51)
operators/medical_imaging/core/io_type.py (1)
  • IOType (19-25)
applications/pipeline_visualization/cpp/nats_logger.hpp (1)
  • tensor (89-93)
🪛 Clang (14.0.6)
applications/pipeline_visualization/cpp/pipeline_visualization.cpp

[error] 32-32: 'getopt.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/pydoc.hpp

[error] 21-21: 'macros.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/python/nats_logger_pybind.cpp

[error] 18-18: 'pybind11/pybind11.h' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/nats_logger.hpp

[error] 21-21: 'holoscan/core/component_spec.hpp' file not found

(clang-diagnostic-error)

applications/pipeline_visualization/cpp/create_tensor.hpp

[error] 21-21: 'memory' file not found

(clang-diagnostic-error)

🪛 Ruff (0.14.6)
applications/pipeline_visualization/visualizer/visualizer_static.py

190-190: Possible binding to all interfaces

(S104)

applications/pipeline_visualization/python/pipeline_visualization.py

55-55: Unused method argument: op_input

(ARG002)


55-55: Unused method argument: context

(ARG002)


90-90: Unused method argument: context

(ARG002)


123-123: Unused method argument: op_output

(ARG002)


123-123: Unused method argument: context

(ARG002)

applications/pipeline_visualization/visualizer/tensor_to_numpy.py

56-56: Avoid specifying long messages outside the exception class

(TRY003)


62-62: Avoid specifying long messages outside the exception class

(TRY003)

applications/pipeline_visualization/visualizer/visualizer_dynamic.py

198-198: Possible binding to all interfaces

(S104)

🔇 Additional comments (11)
applications/pipeline_visualization/Dockerfile (2)

16-88: Multi-stage Dockerfile structure is well-organized and appropriate.

The use of multiple build stages (base → holohub-cli-prerequisites → holohub-cli → application-dependencies) is a best practice for keeping image size minimal and dependencies isolated. Conditional Python/pip installation is flexible and handles base images with or without Python pre-installed. The HOLOSCAN_INPUT_PATH environment variable correctly sets a default data directory for the application.


87-88: Verify that requirements.txt exists and contains appropriate dependencies.

Line 87 copies applications/pipeline_visualization/requirements.txt; confirm that this file exists in the PR and lists every dependency the application needs (e.g., the NATS client and the Dash visualizer packages).

applications/pipeline_visualization/python/metadata.json (1)

1-44: LGTM!

The metadata.json structure is complete and properly formatted. All required fields are present (name, authors, language, version, changelog, holoscan_sdk, platforms, tags, ranking, requirements, default_mode, modes), and the first tag "Visualization" matches an approved category.

Based on coding guidelines.

applications/pipeline_visualization/cpp/metadata.json (1)

1-44: LGTM!

The C++ metadata.json is properly structured with all required fields. The "Visualization" category is approved, and the configuration appropriately references the compiled binary.

Based on coding guidelines.

applications/pipeline_visualization/python/pydoc.hpp (2)

21-21: Static analysis false positive - macros.hpp is resolved via CMake.

The missing file warning can be safely ignored. The macros.hpp header is part of the Holoscan SDK pybind11 infrastructure and will be available at compile time through the CMake build configuration.


27-118: Excellent documentation!

The PYDOC block is comprehensive and well-structured, covering all parameters, their types, defaults, and the filtering logic in detail. This will provide clear guidance for Python users.

applications/pipeline_visualization/visualizer/tensor_to_numpy.py (2)

43-67: LGTM!

The DLDataTypeCode to NumPy dtype mapping correctly handles signed/unsigned integers and floats, with appropriate validation for bit widths and lane counts. The structured dtype for multi-lane vectors is well-implemented.
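A standalone sketch of the mapping idea: the codes follow the DLPack convention (0 = int, 1 = uint, 2 = float); the enum names and the multi-lane handling in tensor_to_numpy.py may differ in detail:

```python
import numpy as np

_DLPACK_INT, _DLPACK_UINT, _DLPACK_FLOAT = 0, 1, 2

def dlpack_to_numpy_dtype(code: int, bits: int, lanes: int = 1) -> np.dtype:
    if bits % 8 != 0:
        raise ValueError(f"unsupported bit width: {bits}")
    base = {_DLPACK_INT: "i", _DLPACK_UINT: "u", _DLPACK_FLOAT: "f"}.get(code)
    if base is None:
        raise ValueError(f"unsupported DLPack type code: {code}")
    scalar = np.dtype(f"{base}{bits // 8}")
    # Multi-lane vectors (e.g. packed RGB) become a sub-array dtype.
    return scalar if lanes == 1 else np.dtype((scalar, (lanes,)))

assert dlpack_to_numpy_dtype(2, 32) == np.float32      # kDLFloat, 32 bits
print(dlpack_to_numpy_dtype(1, 8, lanes=3))            # 3-lane uint8 vector
```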


72-92: Shape and stride handling looks correct.

The HWC layout assumption is properly documented, and the trailing singleton dimension removal is appropriate for this application. The stride handling correctly materializes the tensor layout when explicit strides are provided.

Based on learnings.
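And a rough illustration of materializing a tensor from an explicit byte-stride layout; the real tensor_to_numpy.py operates on the decoded FlatBuffers fields, so the signature below is hypothetical:

```python
import numpy as np

def materialize(buffer: bytes, shape, byte_strides, dtype) -> np.ndarray:
    base = np.frombuffer(buffer, dtype=dtype)
    # NumPy strides are expressed in bytes, matching the explicit strides
    # carried with the tensor; as_strided performs no bounds checking, so the
    # caller must supply consistent shape/stride values.
    strided = np.lib.stride_tricks.as_strided(base, shape=shape, strides=byte_strides)
    return np.ascontiguousarray(strided)   # copy into a plain contiguous array

# Example: a 2x3 float32 tensor stored with a padded row pitch of 16 bytes.
raw = np.zeros((2, 4), dtype=np.float32)
raw[:, :3] = [[1, 2, 3], [4, 5, 6]]
print(materialize(raw.tobytes(), (2, 3), (16, 4), np.float32))
```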

applications/pipeline_visualization/cpp/create_tensor.hpp (1)

18-64: Excellent header design!

The header demonstrates best practices with forward declarations to minimize dependencies, comprehensive documentation covering device types, error behavior, and materialization conditions, and a clean public API surface.

Based on learnings (FlatBuffers include path is correctly configured via CMake).

applications/pipeline_visualization/cpp/nats_logger.hpp (1)

43-188: Well-designed data logger implementation!

The NatsLogger class follows Holoscan SDK patterns with proper use of the resource macros, pimpl idiom for encapsulation, and a clean nested AsyncNatsBackend for asynchronous processing. The API surface clearly overrides all required logging methods with comprehensive documentation.
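The logger itself is C++; purely as an illustration of the asynchronous-backend idea (queue on the hot path, publish from a separate task), here is a Python/asyncio sketch using the nats-py client — the subject name, queue size, and overall structure are assumptions, not the NatsLogger implementation:

```python
import asyncio

import nats  # nats-py client; assumes a NATS server at the given URL


async def nats_publisher(queue: asyncio.Queue, url: str = "nats://localhost:4222",
                         subject: str = "holoscan.tensors") -> None:
    """Drain serialized log entries from `queue` and publish them to NATS,
    keeping the producer (the pipeline) free of network latency."""
    nc = await nats.connect(url)
    try:
        while True:
            payload = await queue.get()          # bytes, e.g. a FlatBuffers message
            await nc.publish(subject, payload)
            queue.task_done()
    finally:
        await nc.drain()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=1024)
    pump = asyncio.create_task(nats_publisher(queue))
    for i in range(3):                           # producer side: enqueue entries
        await queue.put(f"sample-{i}".encode())
    await queue.join()                           # wait until everything is published
    pump.cancel()
    try:
        await pump
    except asyncio.CancelledError:
        pass


if __name__ == "__main__":
    asyncio.run(main())
```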

applications/pipeline_visualization/visualizer/visualizer_static.py (1)

138-155: Robust error handling in place!

The message processing correctly handles unknown unique_ids and tensor conversion failures with logging and graceful continuation, preventing callback crashes while maintaining visibility into issues.
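A minimal sketch of that log-and-continue pattern — `known_plots` and `tensor_to_numpy` are hypothetical names used only to illustrate the shape of the callback, not the visualizer_static.py code:

```python
import logging

logger = logging.getLogger("visualizer")


def on_message(msg: dict, known_plots: dict, tensor_to_numpy) -> None:
    unique_id = msg.get("unique_id")
    if unique_id not in known_plots:
        # Unknown source: log it and keep the callback alive.
        logger.warning("ignoring message for unknown unique_id %r", unique_id)
        return
    try:
        array = tensor_to_numpy(msg["tensor"])
    except Exception:
        # Conversion failure: keep visibility via the log, never crash the callback.
        logger.exception("failed to convert tensor for %r", unique_id)
        return
    known_plots[unique_id].update(array)
```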

Member

@bhashemian bhashemian left a comment

LGTM! I tested it and it works beautifully! Thanks @AndreasHeumann for this interesting application.

@bhashemian bhashemian enabled auto-merge (squash) December 1, 2025 16:46
Contributor

@greptile-apps greptile-apps bot left a comment

29 files reviewed, no comments


@bhashemian
Member

@coderabbitai resolve all comments

@bhashemian bhashemian merged commit 62d6b40 into nvidia-holoscan:main Dec 1, 2025
11 checks passed
@github-project-automation github-project-automation bot moved this to Done in Holohub Dec 1, 2025
@coderabbitai

coderabbitai bot commented Dec 1, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Get the PR review comments made by coderabbitai
echo "=== Listing CodeRabbit comments on PR #1220 ==="
gh pr view 1220 --repo nvidia-holoscan/holohub --json comments --jq '.comments[] | select(.author.login == "coderabbitai[bot]" or .author.login == "coderabbitai") | {id: .id, body: .body[0:100]}'

# Get review thread comments
echo ""
echo "=== Listing CodeRabbit review comments on PR #1220 ==="
gh api /repos/nvidia-holoscan/holohub/pulls/1220/comments --jq '.[] | select(.user.login == "coderabbitai[bot]" or .user.login == "coderabbitai") | {id: .id, body: .body[0:100], path: .path}'

Length of output: 5564


🏁 Script executed:

#!/bin/bash

# Array of comment IDs to resolve
comment_ids=(
  2514522688
  2514522701
  2514522716
  2514522733
  2514522742
  2514522757
  2514522768
  2514522783
  2514522803
  2514522818
  2514522834
  2514522843
  2514522853
  2514522863
  2514522872
  2514522881
  2514586790
  2514586815
  2514586828
  2514586834
  2514586850
  2514712249
  2514737542
  2514737546
  2514737557
  2514737566
)

echo "Resolving ${#comment_ids[@]} CodeRabbit review comments..."
resolved_count=0
failed_count=0

for comment_id in "${comment_ids[@]}"; do
  echo "Resolving comment ID: $comment_id"
  
  # Use GitHub API to resolve the review comment thread
  response=$(gh api --method PATCH \
    -H "Accept: application/vnd.github+json" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    "/repos/nvidia-holoscan/holohub/pulls/comments/$comment_id" \
    -f resolved=true 2>&1)
  
  if [ $? -eq 0 ]; then
    ((resolved_count++))
    echo "  ✓ Resolved"
  else
    ((failed_count++))
    echo "  ✗ Failed: $response"
  fi
done

echo ""
echo "Summary:"
echo "  Resolved: $resolved_count"
echo "  Failed: $failed_count"
echo "  Total: ${#comment_ids[@]}"

Length of output: 9399


🏁 Script executed:

#!/bin/bash

# Get review threads with their IDs using GraphQL
echo "Fetching review thread IDs..."

# Query to get all review threads for the PR
threads_query='query {
  repository(owner: "nvidia-holoscan", name: "holohub") {
    pullRequest(number: 1220) {
      reviewThreads(first: 100) {
        nodes {
          id
          isResolved
          comments(first: 1) {
            nodes {
              author {
                login
              }
              body
            }
          }
        }
      }
    }
  }
}'

# Execute the query and extract thread IDs for unresolved threads from coderabbitai
thread_ids=$(gh api graphql -f query="$threads_query" --jq '.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved == false and (.comments.nodes[0].author.login == "coderabbitai" or .comments.nodes[0].author.login == "coderabbitai[bot]")) | .id')

if [ -z "$thread_ids" ]; then
  echo "No unresolved CodeRabbit review threads found."
  exit 0
fi

echo "Found unresolved CodeRabbit threads. Resolving..."
resolved_count=0
failed_count=0

while IFS= read -r thread_id; do
  if [ -z "$thread_id" ]; then
    continue
  fi
  
  echo "Resolving thread: $thread_id"
  
  # Use GraphQL mutation to resolve the thread
  mutation="mutation {
    resolveReviewThread(input: {threadId: \"$thread_id\"}) {
      thread {
        id
        isResolved
      }
    }
  }"
  
  response=$(gh api graphql -f query="$mutation" 2>&1)
  
  if [ $? -eq 0 ]; then
    ((resolved_count++))
    echo "  ✓ Resolved"
  else
    ((failed_count++))
    echo "  ✗ Failed: $response"
  fi
done <<< "$thread_ids"

echo ""
echo "Summary:"
echo "  Resolved: $resolved_count"
echo "  Failed: $failed_count"

Length of output: 76


🏁 Script executed:

#!/bin/bash

# Check all review threads (both resolved and unresolved) to verify status
echo "Checking all CodeRabbit review threads..."

threads_query='query {
  repository(owner: "nvidia-holoscan", name: "holohub") {
    pullRequest(number: 1220) {
      reviewThreads(first: 100) {
        nodes {
          id
          isResolved
          comments(first: 1) {
            nodes {
              author {
                login
              }
              body
              path
            }
          }
        }
      }
    }
  }
}'

result=$(gh api graphql -f query="$threads_query")

# Count total CodeRabbit threads
total=$(echo "$result" | jq -r '.data.repository.pullRequest.reviewThreads.nodes[] | select(.comments.nodes[0].author.login == "coderabbitai" or .comments.nodes[0].author.login == "coderabbitai[bot]") | .id' | wc -l)

# Count resolved threads
resolved=$(echo "$result" | jq -r '.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved == true and (.comments.nodes[0].author.login == "coderabbitai" or .comments.nodes[0].author.login == "coderabbitai[bot]")) | .id' | wc -l)

# Count unresolved threads
unresolved=$(echo "$result" | jq -r '.data.repository.pullRequest.reviewThreads.nodes[] | select(.isResolved == false and (.comments.nodes[0].author.login == "coderabbitai" or .comments.nodes[0].author.login == "coderabbitai[bot]")) | .id' | wc -l)

echo "CodeRabbit Review Thread Status:"
echo "  Total threads: $total"
echo "  Already resolved: $resolved"
echo "  Unresolved: $unresolved"

if [ "$unresolved" -eq 0 ]; then
  echo ""
  echo "✓ All CodeRabbit review threads are already resolved!"
fi

Length of output: 188


@bhashemian, all CodeRabbit review comments on PR #1220 are already resolved! ✓

I've verified the status and found:

  • Total CodeRabbit review threads: 62
  • Already resolved: 62
  • Unresolved: 0

There are no outstanding review comments that need resolution. The PR is clear from a CodeRabbit review perspective.
