add runbooks for new alerts #363

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:master from yati1998:odfhealthscore
Jan 28, 2026

Conversation

@yati1998
Contributor

There are new alerts introduced for ODF health score calculation. This commit adds runbooks for each of them.

@yati1998
Contributor Author

@aruniiird @weirdwiz please do have a look.

Contributor

@weirdwiz weirdwiz left a comment

needs some changes

Contributor

No mitigation section? Please add mitigation steps.

Contributor Author

I am not sure what mitigation steps should be added here, so I left it empty for now.
@weirdwiz if you have any suggestions, we can discuss offline.

Member

Mitigation for this is to either move workloads to other storage systems or (preferred) add more disks.
Ceph is one of the few storage systems that grows IO performance linearly with capacity... so more disks = more performance
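
As a purely illustrative sketch (not part of this PR), checking current capacity and per-OSD utilization before adding disks could look like this, assuming the ceph toolbox is deployed as the rook-ceph-tools deployment in openshift-storage:

```bash
# Cluster-wide and per-OSD utilization, run via the toolbox pod.
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph df
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph osd df
```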

Contributor

Same here, add mitigation steps.

Contributor

The MTU runbook should mention how to verify jumbo frames work end-to-end

Contributor Author

I am not sure about this, maybe we can work on it once you are back.

Member

You can find many "Jumbo Frame" test instructions on the internet - for example this one:
https://blah.cloud/networks/test-jumbo-frames-working/

In the end you use ping with a certain ICMP payload size (which differs between OSs) and you tell the network stack not to fragment the packet (but send it whole).

As a mitigation, customers need to ensure the node network interfaces are configured for 9000 bytes AND that all switches in between the nodes also support 9000 bytes on their ports.
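
For illustration, a minimal sketch of such a test from a Linux host, assuming the iputils ping; 8972 bytes of payload corresponds to a 9000-byte MTU minus the 20-byte IP and 8-byte ICMP headers:

```bash
# Send unfragmented ICMP packets sized for a 9000-byte MTU (-M do prohibits fragmentation).
ping -c 4 -M do -s 8972 <node-internal-ip>
# If this fails while a normal ping succeeds, some hop on the path does not pass jumbo frames.
```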

Contributor

Existing runbooks reference shared helper documents like:

  • helpers/podDebug.md
  • helpers/troubleshootCeph.md
  • helpers/gatherLogs.md
  • helpers/networkConnectivity.md

The new runbooks embed all commands inline instead of referencing these. Consider using helper links for consistency and maintainability.

```bash
ping <node-internal-ip>
```
4. Use mtr or traceroute to analyze path and hops.
5. Verify if the node is under high CPU or network load:
Contributor

Suggested change
5. Verify if the node is under high CPU or network load:
5. Verify if the node is under high CPU or network load:
   oc debug node/<node>
   top -b -n 1 | head -20
   sar -u 1 5

```bash
sar -n DEV 1 5
```
3. Use Prometheus to graph:
```prompql
Contributor

Suggested change
```prompql
```promql

@yati1998
Contributor Author

@weirdwiz updated the PR except for the 2 comments; we can work on them once you are back.


## Impact

* Brief service interruption (e.g., MON restart may cause quorum re-election).
Member

Service interruption sounds worse than it is... unless the MONs cannot agree on a quorum any more, there is no "downtime".
Instead, let's put all of the Impact points in relative terms... Since Ceph is very resilient, Pod restarts should only have an effect if they happen frequently (more than 10 times in a 5 min window).

## Impact

* Brief service interruption (e.g., MON restart may cause quorum re-election).
* OSD restart triggers PG peering and potential recovery.
Member

Someone who doesn't know Ceph will not understand this :) (even though it is factually correct)

How do you like my proposal:

If OSDs are restarted frequently or do not start up within 5 minutes, the cluster might decide to rebalance the data onto other more reliable disks. If this happens, the cluster will temporarily be slightly less performant.


## Impact

* Increased I/O latency for RBD/CephFS clients.
Member

RBD and CephFS are Ceph terms. Let's keep it simple and just call them Block, Object and File (all of these would be affected)

## Impact

* Increased I/O latency for RBD/CephFS clients.
* Slower OSD response times, risking heartbeat timeouts.
Member

I don't think that's true. If the underlying storage is busy, the process should still be able to send heartbeats?!

Member

Mitigation for this is to either move workloads to other storage systems or (preferred) add more disks.
Ceph is one of the few storage systems that grows IO performance linearly with capacity... so more disks = more performance

5. Review Ceph monitor logs if the node hosts MONs:
```bash
oc logs -l app=rook-ceph-mon -n openshift-storage
```
Member

Another step could be to check switch/network monitoring to see if any ports are too busy.
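
If switch-side monitoring is not accessible, a rough node-side proxy (sketch only; the interface name is a placeholder) is to watch the interface error and drop counters:

```bash
# From a debug shell on the node (oc debug node/<node-name>, then chroot /host):
ip -s link show                                        # rising RX/TX errors or drops hint at saturation
ethtool -S <interface> | grep -Ei 'drop|discard|err'   # NIC-level counters for the storage interface
```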

## Diagnosis


1. Identify affected node(s):
Member

Why do we have this step if we get the node name and IP in step #2 from the alert?


## Mitigation

1. Network tuning: Ensure jumbo frames (MTU ≥ 9000) are enabled end-to-end
Member

Are you sure Jumbo Frames will help with latency? Why?

Contributor Author

not exactly sure.

Member

You can find many "Jumbo Frame" test instructions on the internet - for example this one:
https://blah.cloud/networks/test-jumbo-frames-working/

In the end you use ping with a certain ICMP payload size (which differs between OSs) and you tell the network stack not to fragment the packet (but send it whole).

As a mitigation, customers need to ensure the node network interfaces are configured for 9000 bytes AND that all switches in between the nodes also support 9000 bytes on their ports.
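
A minimal sketch of the node-side part of that check (placeholder names; run from a debug shell on the node):

```bash
# Confirm the configured MTU on the node's interfaces.
oc debug node/<node-name>
chroot /host
ip link show | grep -i mtu    # storage-network interfaces should report mtu 9000
```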


## Mitigation

1. Short term: Throttle non-essential traffic on the node.
Member

how?

Member

@mulbc mulbc left a comment

lgtm - small error corrections where I added recommendations. Aside from these I'm good with the PR


* If OSDs are restarted frequently or do not start up within 5 minutes,
the cluster might decide to rebalance the data onto other more reliable
disks.If this happens, the cluster will temporarily be slightly less
Member

Suggested change
disks.If this happens, the cluster will temporarily be slightly less
disks. If this happens, the cluster will temporarily be slightly less


## Mitigation

* Increase more disks to enhance the performance.
Member

Suggested change
* Increase more disks to enhance the performance.
* Add more disks to the cluster to enhance the performance.

## Diagnosis

1. From the alert, note the instance (node IP).
2. Confirm the node does not run OSDs:
Member

If this is an OSD node and triggers the >100ms alert, we're in trouble :P
So I think this check does not provide any value (we're not doing anything with the data we gather with this)

there are new alerts introduced for odf
health score calculation. This commit adds
runbooks for each of them

Signed-off-by: yati1998 <ypadia@redhat.com>
@yati1998
Contributor Author

@weirdwiz please review the PR; I have addressed all the comments.

@yati1998
Contributor Author

@agarwal-mudit can you please review the PR? It has been approved by @mulbc and @weirdwiz, and all other comments are addressed.

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 27, 2026
@agarwal-mudit agarwal-mudit left a comment

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jan 28, 2026
@openshift-ci
Contributor

openshift-ci bot commented Jan 28, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: agarwal-mudit, malayparida2000, mulbc, yati1998

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci
Contributor

openshift-ci bot commented Jan 28, 2026

@yati1998: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 9f906b3 into openshift:master Jan 28, 2026
2 checks passed

## Meaning

A core ODF pod (OSD, MON, MGR, ODF operator, or metrics exporter) has

Why are the ODF operator and metrics exporter considered core pods?

```bash
iostat -x 2 5
```
4. Correlate with Ceph:

Are they supposed to run these commands on the toolbox pod?
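
For illustration only (the runbook may intend otherwise): iostat would typically run on the node itself, while the Ceph correlation would run from the toolbox, assuming it is deployed as rook-ceph-tools:

```bash
# Disk statistics on the node:
oc debug node/<node-name>
chroot /host
iostat -x 2 5
# Ceph-side correlation from the toolbox pod:
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph osd perf
```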


## Meaning

ICMP RTT latency to non-OSD ODF nodes (e.g., MON, MGR, MDS, or client nodes)

What are client nodes here? CSI clients?


* Delayed Ceph monitor elections or quorum instability.
* Slower metadata operations in CephFS.
* Increased latency for CSI controller operations.

What about the CSI node operations and csi-addons operations?


1. From the alert, note the instance (node IP).
2. Test connectivity:
```bash

From where are they supposed to run these commands?

3. Check system load and network interface stats on the node:
```bash
oc debug node/<node-name>
sar -n DEV 1 5
```

Please add details on what the 1 and 5 are here and how to choose them.
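
For reference, the trailing numbers are sar's sampling interval (seconds) and sample count, for example:

```bash
# Report network device statistics every 1 second, 5 times; adjust interval and count as needed.
sar -n DEV 1 5
```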

## Diagnosis

1. Check the alert’s instance label to get the node IP.
2. From a monitoring or debug pod, test connectivity:

Can we provide an example command to get the monitoring or debug pod?
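
One possible example, using a node debug pod as the vantage point (names are placeholders):

```bash
# Start a debug pod on any node and run the connectivity test from it.
oc debug node/<node-name>
chroot /host
ping -c 4 <target-node-ip>
```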

```bash
top -b -n 1 | head -20
sar -u 1 5
```
5. Check Ceph health and OSD status:

Is the toolbox already enabled, or do they need to enable it?
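
If it is not already running, one way to enable it in ODF is to set enableCephTools on the OCSInitialization resource (the resource name ocsinit is the usual default; verify it in your cluster):

```bash
# Enable the ceph toolbox deployment, then check that the pod comes up.
oc patch ocsinitialization ocsinit -n openshift-storage --type json \
  --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
oc -n openshift-storage get pods -l app=rook-ceph-tools
```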

@malayparida2000
Contributor

@yati1998 Please take a look at the concerns Madhu has raised and address them in a follow-up PR if possible.
