Conversation

@vaspahomov
Contributor

@vaspahomov vaspahomov commented Jun 17, 2025

Why we need this PR

Idea is the same as in #151

If we start n parallel NodeMaintenances at the same time, the last one will only be handled after n * d, where d is the duration of a single reconcile.
In the current model the longest reconcile takes more than 30s.

That means that if we start ~50 parallel NodeMaintenances and each of them fails to finish immediately (e.g. because a blocking PDB exists), we will only get to the last one after ~30 minutes.

This PR aims to lower the maximum duration of a single reconcile.

Changes made

Made DrainerTimeout configurable.

Which issue(s) this PR fixes

Test plan

Summary by CodeRabbit

  • New Features

    • Added a configurable command-line flag to set the drainer timeout duration.
    • Made the drainer timeout dynamically adjustable per instance, rather than using a fixed value.
  • Tests

    • Updated test deployment to use the default drainer timeout for pod termination grace period.

@openshift-ci openshift-ci bot requested review from clobrano and mshitrit June 17, 2025 18:07
@openshift-ci
Contributor

openshift-ci bot commented Jun 17, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: vaspahomov
Once this PR has been reviewed and has the lgtm label, please assign slintes for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai bot commented Jun 17, 2025

Walkthrough

A new configurable drainer timeout is introduced for the NodeMaintenanceReconciler, replacing the previously hardcoded value. This timeout can now be set via a command-line flag and is passed through the main application, controller, and tests. Associated struct fields, function signatures, and test logic are updated to use this configurable value.

Changes

File(s) Change Summary
controllers/nodemaintenance_controller.go, controllers/controllers_suite_test.go Added DrainerTimeout field to NodeMaintenanceReconciler; made drainer timeout configurable; updated function signatures and logic to use the new field; initialized field in test setup.
main.go Added drainer-timeout command-line flag; validated and logged flag value; passed value to NodeMaintenanceReconciler; updated imports and formatting.
test/e2e/node_maintenance_test.go Updated test deployment to use DefaultDrainerTimeout for pod termination grace period calculation.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Main
    participant NodeMaintenanceReconciler
    participant Drainer

    User->>Main: Start with --drainer-timeout flag
    Main->>NodeMaintenanceReconciler: Initialize with DrainerTimeout
    NodeMaintenanceReconciler->>Drainer: createDrainer(..., DrainerTimeout)
    Drainer-->>NodeMaintenanceReconciler: Drainer with configured timeout

Poem

In the warren where code runs deep,
A drainer timeout now can leap!
No more constants, set and done—
Flags and fields for everyone.
With each deploy and test anew,
The rabbits cheer: "Configurable, woohoo!"
🐇⏱️


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1ef107c and e2712c8.

📒 Files selected for processing (2)
  • controllers/controllers_suite_test.go (1 hunks)
  • main.go (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • controllers/controllers_suite_test.go
  • main.go

@openshift-ci
Contributor

openshift-ci bot commented Jun 17, 2025

Hi @vaspahomov. Thanks for your PR.

I'm waiting for a medik8s member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (1)
controllers/controllers_suite_test.go (1)

90-105: Compilation will break – createDrainer call missing the new timeout argument

createDrainer now requires three parameters (ctx, cfg, timeout) but the test suite still calls it with only two.
This will prevent the test package (and the whole build) from compiling.

-	drainer, err = createDrainer(ctx, cfg)
+	// use the same timeout that the reconciler under test is configured with
+	drainer, err = createDrainer(ctx, cfg, r.DrainerTimeout)
♻️ Duplicate comments (1)
controllers/nodemaintenance_controller.go (1)

126-129: Error handling path depends on valid timeout

Because the timeout is injected here, any invalid (≤ 0) value slips straight into the helper and disables the safety net.
This ties back to the validation suggestion above.

🧹 Nitpick comments (3)
controllers/nodemaintenance_controller.go (1)

71-72: Zero/negative timeout leaves drain running indefinitely – validate DrainerTimeout

A value ≤ 0 disables the timeout according to kubectl/drain.Helper semantics.
Unless you explicitly want to allow “infinite” drains, add a guard that falls back to DefaultDrainerTimeout and log a warning when an invalid value is provided.

+	if r.DrainerTimeout <= 0 {
+		r.logger.Info("Invalid drainer timeout requested – falling back to default",
+			"requested", r.DrainerTimeout, "default", DefaultDrainerTimeout)
+		r.DrainerTimeout = DefaultDrainerTimeout
+	}
main.go (1)

84-86: Add basic flag validation for --drainer-timeout

Users can now set --drainer-timeout=0 or a negative duration, unintentionally disabling the timeout.
Fail fast (or at least warn) during startup:

flag.DurationVar(&drainerTimeout, "drainer-timeout", controllers.DefaultDrainerTimeout, "Timeout for draining a node.")

+flag.Parse()
+
+if drainerTimeout <= 0 {
+	setupLog.Error(fmt.Errorf("invalid drainer-timeout"), "timeout must be > 0")
+	os.Exit(1)
+}
-
-flag.Parse()
test/e2e/node_maintenance_test.go (1)

345-346: Minor readability nit – cast after addition

The current expression is correct, but casting the whole sum avoids precedence surprises and makes the intent clearer:

-	TerminationGracePeriodSeconds: ptr.To[int64](int64(nodemaintenance.DefaultDrainerTimeout.Seconds()) + 50),
+	TerminationGracePeriodSeconds: ptr.To[int64](int64(nodemaintenance.DefaultDrainerTimeout.Seconds()+50)),
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9f1d95d and 1ef107c.

📒 Files selected for processing (4)
  • controllers/controllers_suite_test.go (1 hunks)
  • controllers/nodemaintenance_controller.go (5 hunks)
  • main.go (5 hunks)
  • test/e2e/node_maintenance_test.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
test/e2e/node_maintenance_test.go (1)
controllers/nodemaintenance_controller.go (1)
  • DefaultDrainerTimeout (59-59)
controllers/controllers_suite_test.go (1)
controllers/nodemaintenance_controller.go (1)
  • DefaultDrainerTimeout (59-59)
controllers/nodemaintenance_controller.go (1)
vendor/k8s.io/kubectl/pkg/drain/drain.go (1)
  • Helper (51-96)

@razo7
Member

razo7 commented Jun 18, 2025

IIUC, your suggestion follows the general line of #122

Member

@slintes slintes left a comment

Pretty straightforward, works for me.

@mshitrit thoughts?

"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.BoolVar(&enableHTTP2, "enable-http2", false, "If HTTP/2 should be enabled for the metrics and webhook servers.")
flag.DurationVar(&drainerTimeout, "drainer-timeout", controllers.DefaultDrainerTimeout, "Timeout for draining a node.")
Member

would it make sense to check for reasonable values?

Member

Probably not relevant to this PR, but this looks to me like something that would make sense to put in a configuration, no?
Would also simplify values validation.

Signed-off-by: vaspahomov <[email protected]>
@mshitrit
Member

Idea is the same as in #151

@vaspahomov is this PR replacing #151 ?

@vaspahomov
Contributor Author

Idea is the same as in #151
@vaspahomov is this PR replacing #151 ?

It's better to have both. Both PRs aim at speeding up the handling of simultaneous NodeMaintenances.

@razo7
Member

razo7 commented Jul 6, 2025

/test 4.18-openshift-e2e

@slintes
Member

slintes commented Oct 23, 2025

@vaspahomov Hi, how do you install NMO, with OLM? I'm wondering how to use such new flags without further modifications to the bundle? 🤔 We can set env vars in the Subscription, but I'm not aware of how to modify the command...
