Conversation

@alyssacgoins alyssacgoins commented Nov 25, 2025

Description of your changes:
Cherry-pick 'configure ml-pipeline service address within driver.go' from upstream KFP.

Checklist:

Summary by CodeRabbit

  • New Features
    • Added CLI flags to configure ML Pipeline server address and port.
    • Driver now propagates the configured ML Pipeline endpoint into launched workloads so runtime components use the specified endpoint.
  • Tests
    • Added/updated tests to verify the ML Pipeline address and port are included in the launcher configuration.


coderabbitai bot commented Nov 25, 2025

Walkthrough

Adds explicit ML pipeline server address and port fields to the driver Options, wires them from CLI flags through driver initialization into pod spec patching, and updates tests to assert the launcher command includes the new --ml_pipeline_server_address and --ml_pipeline_server_port flags.

Changes

  • CLI flag capture (backend/src/v2/cmd/driver/main.go): Added MLPipelineServerAddress and MLPipelineServerPort to the driver.Options initialization, populated from new CLI flags.
  • Driver configuration & pod spec initialization (backend/src/v2/driver/driver.go): Added MLPipelineServerAddress and MLPipelineServerPort fields to Options, updated the initPodSpecPatch signature to accept mlPipelineServerAddress and mlPipelineServerPort, and appended the --ml_pipeline_server_address and --ml_pipeline_server_port flags to the launcher command.
  • Container driver (backend/src/v2/driver/container.go): Updated the call to initPodSpecPatch to pass MLPipelineServerAddress and MLPipelineServerPort from opts.
  • Driver tests (backend/src/v2/driver/driver_test.go): Updated existing test invocations to pass ml-pipeline.kubeflow and 8887, and added Test_initPodSpecPatch_mlPipelineServerConfig to verify the launcher command contains the new flags and values.
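Based on the file summaries above, the flag-to-Options wiring can be sketched roughly as follows. This is a minimal stand-in, not the PR's actual code: the real driver.Options struct carries many more fields, and the default values shown here are illustrative assumptions.

```go
package main

import (
	"flag"
	"fmt"
)

// Options is a simplified stand-in for driver.Options; the real struct
// carries many more fields (MLMD server, pipeline spec, task name, etc.).
type Options struct {
	MLPipelineServerAddress string
	MLPipelineServerPort    string
}

var (
	// Flag names match the PR; the defaults here are illustrative assumptions.
	mlPipelineServerAddress = flag.String("ml_pipeline_server_address", "ml-pipeline.kubeflow", "ML Pipeline API server address")
	mlPipelineServerPort    = flag.String("ml_pipeline_server_port", "8887", "ML Pipeline API server port")
)

// optionsFromFlags mirrors how main.go would populate driver.Options
// from the parsed CLI flags.
func optionsFromFlags() Options {
	return Options{
		MLPipelineServerAddress: *mlPipelineServerAddress,
		MLPipelineServerPort:    *mlPipelineServerPort,
	}
}

func main() {
	flag.Parse()
	opts := optionsFromFlags()
	fmt.Printf("%s:%s\n", opts.MLPipelineServerAddress, opts.MLPipelineServerPort)
}
```

Running the sketch with no arguments prints the assumed defaults; passing the flags overrides them, which is the behavior the driver relies on to reach a non-default ml-pipeline service.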

Sequence Diagram

sequenceDiagram
    participant CLI as CLI Flags
    participant Main as main.go
    participant Options as Options Struct
    participant Driver as driver.go
    participant Container as container.go
    participant PodSpec as Pod Spec Patch
    participant Launcher as Launcher Command

    CLI->>Main: --ml_pipeline_server_address / --ml_pipeline_server_port
    Main->>Options: set MLPipelineServerAddress, MLPipelineServerPort
    Options->>Container: pass Options
    Container->>Driver: call initPodSpecPatch(..., address, port)
    Driver->>PodSpec: initPodSpecPatch receives address, port
    PodSpec->>Launcher: add --ml_pipeline_server_address / --ml_pipeline_server_port

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Inspect main.go, driver.go, and container.go for consistent types and nil/empty handling of the new fields.
  • Verify launcher command assembly includes the flags exactly as expected.
  • Review new and updated tests in driver_test.go for correctness and coverage.

Poem

🐰 I hopped through flags at morning's start,
Two little fields I tucked in my cart,
From CLI to launcher they now stream,
Ports and addresses join the team,
Tests nod kindly — onward we part.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 15.79%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title accurately describes the main change: cherry-picking an upstream feature to configure the ML pipeline service address within driver.go.
  • Description check (✅ Passed): The description follows the template structure, with all required sections completed: change description, signed-off confirmation, title convention verification, and DSPO testing checklist.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7709a95 and 10689e6.

📒 Files selected for processing (4)
  • backend/src/v2/cmd/driver/main.go (1 hunks)
  • backend/src/v2/driver/container.go (1 hunks)
  • backend/src/v2/driver/driver.go (3 hunks)
  • backend/src/v2/driver/driver_test.go (16 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/src/v2/cmd/driver/main.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (43)
  • GitHub Check: build / image-build-with-cache (scheduledworkflow, backend/Dockerfile.scheduledworkflow, .)
  • GitHub Check: build / image-build-with-cache (frontend, frontend/Dockerfile, .)
  • GitHub Check: build / image-build-with-cache (apiserver, backend/Dockerfile, .)
  • GitHub Check: build / image-build-with-cache (driver, backend/Dockerfile.driver, .)
  • GitHub Check: build / image-build-with-cache (persistenceagent, backend/Dockerfile.persistenceagent, .)
  • GitHub Check: build / image-build-with-cache (launcher, backend/Dockerfile.launcher, .)
  • GitHub Check: backend-tests
  • GitHub Check: check_ci_status
🔇 Additional comments (6)
backend/src/v2/driver/container.go (1)

234-235: LGTM!

The new ML pipeline server configuration parameters are correctly wired through from the Options struct to initPodSpecPatch, maintaining consistency with the existing MLMD server configuration pattern.
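The pass-through pattern described here can be sketched as below. Both the trimmed Options struct and the initPodSpecPatch signature are hypothetical stand-ins: the real function takes the container spec, task, and several other parameters, and the MLMD default values shown are assumptions for illustration.

```go
package main

import "fmt"

// Options is a trimmed stand-in for driver.Options.
type Options struct {
	MLMDServerAddress       string
	MLMDServerPort          string
	MLPipelineServerAddress string
	MLPipelineServerPort    string
}

// initPodSpecPatch is a hypothetical stand-in showing only the endpoint
// parameters; it returns the endpoint-related portion of the launcher command,
// with the ML Pipeline flags positioned before the MLMD flags as in the PR.
func initPodSpecPatch(mlPipelineAddr, mlPipelinePort, mlmdAddr, mlmdPort string) []string {
	return []string{
		"--ml_pipeline_server_address", mlPipelineAddr,
		"--ml_pipeline_server_port", mlPipelinePort,
		"--mlmd_server_address", mlmdAddr,
		"--mlmd_server_port", mlmdPort,
	}
}

func main() {
	opts := Options{
		MLMDServerAddress:       "metadata-grpc-service.kubeflow", // assumed default
		MLMDServerPort:          "8080",                           // assumed default
		MLPipelineServerAddress: "ml-pipeline.kubeflow",
		MLPipelineServerPort:    "8887",
	}
	// container.go passes the new fields through from opts, alongside the MLMD pair.
	cmd := initPodSpecPatch(opts.MLPipelineServerAddress, opts.MLPipelineServerPort,
		opts.MLMDServerAddress, opts.MLMDServerPort)
	fmt.Println(cmd)
}
```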

backend/src/v2/driver/driver_test.go (2)

283-284: LGTM!

All existing test invocations of initPodSpecPatch have been consistently updated to include the new ML pipeline server address and port parameters with reasonable default values.

Also applies to: 406-407, 460-461, 516-517, 567-568, 698-699, 760-761, 828-829, 1032-1033, 1149-1149, 1192-1192, 1259-1260, 1310-1311, 1392-1393, 1495-1496


1559-1605: LGTM!

The new test properly validates that custom ML pipeline server address and port configuration values are correctly propagated to the launcher command via the --ml_pipeline_server_address and --ml_pipeline_server_port flags.
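The shape of such an assertion can be sketched like this. The helper names are hypothetical; the real Test_initPodSpecPatch_mlPipelineServerConfig calls initPodSpecPatch and inspects the generated pod spec rather than a bare string slice.

```go
package main

import (
	"fmt"
	"slices" // requires Go 1.21+
)

// launcherCommand is a stand-in for the command assembled by initPodSpecPatch.
func launcherCommand(addr, port string) []string {
	return []string{
		"launcher-v2",
		"--ml_pipeline_server_address", addr,
		"--ml_pipeline_server_port", port,
	}
}

// containsFlag reports whether cmd contains flagName immediately
// followed by the expected value.
func containsFlag(cmd []string, flagName, value string) bool {
	i := slices.Index(cmd, flagName)
	return i >= 0 && i+1 < len(cmd) && cmd[i+1] == value
}

func main() {
	// Custom (non-default) values, as the new test uses, to prove propagation.
	cmd := launcherCommand("custom-ml-pipeline.example.svc", "9999")
	fmt.Println(containsFlag(cmd, "--ml_pipeline_server_address", "custom-ml-pipeline.example.svc"))
	fmt.Println(containsFlag(cmd, "--ml_pipeline_server_port", "9999"))
}
```

Checking the flag and its value as an adjacent pair, rather than substring-matching the joined command, avoids false positives when one value is a prefix of another.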

backend/src/v2/driver/driver.go (3)

81-83: LGTM!

The new ML pipeline server configuration fields are appropriately typed and positioned in the Options struct. The lack of validation is consistent with the existing pattern for similar fields like MLMDServerAddress and MLMDServerPort.


242-243: LGTM!

The function signature update correctly adds the ML pipeline server configuration parameters in a logical position, maintaining consistency with the existing MLMD server parameter pattern.


286-287: LGTM!

The launcher command correctly includes the new ML pipeline server configuration flags with proper naming conventions and logical positioning before the MLMD server flags.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

@dsp-developers

A set of new images has been built to help with testing out this PR:
API Server: quay.io/opendatahub/ds-pipelines-api-server:pr-237
DSP DRIVER: quay.io/opendatahub/ds-pipelines-driver:pr-237
DSP LAUNCHER: quay.io/opendatahub/ds-pipelines-launcher:pr-237
Persistence Agent: quay.io/opendatahub/ds-pipelines-persistenceagent:pr-237
Scheduled Workflow Manager: quay.io/opendatahub/ds-pipelines-scheduledworkflow:pr-237
MLMD Server: quay.io/opendatahub/mlmd-grpc-server:latest
MLMD Envoy Proxy: registry.redhat.io/openshift-service-mesh/proxyv2-rhel8:2.3.9-2
UI: quay.io/opendatahub/ds-pipelines-frontend:pr-237
TESTS: quay.io/opendatahub/ds-pipelines-tests:pr-237

@dsp-developers

An OCP cluster where you are logged in as cluster admin is required.

The Data Science Pipelines team recommends testing this using the Data Science Pipelines Operator. Check here for more information on using the DSPO.

To use and deploy a DSP stack with these images (assuming the DSPO is deployed), first save the following YAML to a file named dspa.pr-237.yaml:

apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
kind: DataSciencePipelinesApplication
metadata:
  name: pr-237
spec:
  dspVersion: v2
  apiServer:
    image: "quay.io/opendatahub/ds-pipelines-api-server:pr-237"
    argoDriverImage: "quay.io/opendatahub/ds-pipelines-driver:pr-237"
    argoLauncherImage: "quay.io/opendatahub/ds-pipelines-launcher:pr-237"
  persistenceAgent:
    image: "quay.io/opendatahub/ds-pipelines-persistenceagent:pr-237"
  scheduledWorkflow:
    image: "quay.io/opendatahub/ds-pipelines-scheduledworkflow:pr-237"
  mlmd:  
    deploy: true  # Optional component
    grpc:
      image: "quay.io/opendatahub/mlmd-grpc-server:latest"
    envoy:
      image: "registry.redhat.io/openshift-service-mesh/proxyv2-rhel8:2.3.9-2"
  mlpipelineUI:
    deploy: true  # Optional component 
    image: "quay.io/opendatahub/ds-pipelines-frontend:pr-237"
  objectStorage:
    minio:
      deploy: true
      image: 'quay.io/opendatahub/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance'

Then run the following:

cd $(mktemp -d)
git clone [email protected]:opendatahub-io/data-science-pipelines.git
cd data-science-pipelines/
git fetch origin pull/237/head
git checkout -b pullrequest 7709a959f7db994a98ad31530f6a26f2c08ea538
oc apply -f dspa.pr-237.yaml

More instructions here on how to deploy and test a Data Science Pipelines Application.

@alyssacgoins alyssacgoins force-pushed the cherry-pick-server-address-update branch from 7709a95 to 10689e6 on November 25, 2025 at 20:38
@dsp-developers

Change to PR detected. A new PR build was completed.
A set of new images has been built to help with testing out this PR:
API Server: quay.io/opendatahub/ds-pipelines-api-server:pr-237
DSP DRIVER: quay.io/opendatahub/ds-pipelines-driver:pr-237
DSP LAUNCHER: quay.io/opendatahub/ds-pipelines-launcher:pr-237
Persistence Agent: quay.io/opendatahub/ds-pipelines-persistenceagent:pr-237
Scheduled Workflow Manager: quay.io/opendatahub/ds-pipelines-scheduledworkflow:pr-237
MLMD Server: quay.io/opendatahub/mlmd-grpc-server:latest
MLMD Envoy Proxy: registry.redhat.io/openshift-service-mesh/proxyv2-rhel8:2.3.9-2
UI: quay.io/opendatahub/ds-pipelines-frontend:pr-237
TESTS: quay.io/opendatahub/ds-pipelines-tests:pr-237

@VaniHaripriya

/lgtm


@hbelmiro hbelmiro left a comment


/lgtm
/approve

@openshift-ci

openshift-ci bot commented Nov 26, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hbelmiro

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment



4 participants