KEP-5471 Extended Toleration Operators for Threshold-Based Placement #5473
Conversation
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

## Summary
Can we include the scenario of adding tolerations based on semantic version comparison? It will likely require either a new operator or some way to express that a taint's string needs to be parsed as a semver.
The current scope for this KEP is integer-only support for SLA/failure-probability because it’s the simplest and safest to implement (avoids floating-point parsing and complex type semantics).
I agree that semver comparisons are a valid and important future use case (e.g., kubelet version taints, device firmware versions). To keep this KEP narrowly scoped and implementable, I propose documenting semantic version comparison in a new Future Work section, and also considering whether we would add this to node affinity as well. wdyt?
One consideration here is the size of the delta. It is a lot of red tape to run through scenarios and API reviews, so making two changes together may be much easier than splitting them into separate KEPs.
I am not objecting to this scope though. This is not blocking the proposal, but I believe it will be very useful.
BTW, we may also want to support semver in CEL as well =). So potentially another scope increase.
Updated the scope to include both semver and CEL.
@sanposhiho fyi for semver addition.
- New operators: `Lt`, `Le`, `Ge`, `Gt` (in addition to existing `Equal`/`Exists`).
- Primary motivation: allow pods to opt‑in to nodes by SLA/failure‑probability values published as taints (e.g., `node.kubernetes.io/sla=950`).
- Scheduler impact is limited to the existing TaintToleration Filter; no new stages or algorithms.
What about the scoring for taints with `PreferNoSchedule`?
Great question. For `PreferNoSchedule` taints, numeric operators only determine whether the taint is considered tolerated for scoring; that maintains consistent soft-preference behavior while enabling threshold-based SLA matching. I've updated the KEP with more details:
https://github.com/kubernetes/enhancements/pull/5473/files#diff-91266bbd6ed6b175ba3fea357b964cc03e25c3f95351ba69b8f63711d9875ac7R450
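For context, a minimal sketch of that behavior, following the upstream pattern in which the TaintToleration plugin counts the `PreferNoSchedule` taints a pod does not tolerate (fewer intolerable taints means a higher score). The function and the `tolerates` helper are illustrative, not the merged implementation; the only place the new operators would plug in is the tolerated check.

```go
import v1 "k8s.io/api/core/v1"

// Sketch: scoring counts soft taints the pod does NOT tolerate. Numeric
// operators (Lt/Le/Ge/Gt) only change what tolerates() returns; the
// scoring formula itself is untouched.
func countIntolerableTaints(taints []v1.Taint, tolerations []v1.Toleration) int {
	intolerable := 0
	for _, taint := range taints {
		if taint.Effect != v1.TaintEffectPreferNoSchedule {
			continue // scoring only looks at soft taints
		}
		tolerated := false
		for _, t := range tolerations {
			if tolerates(t, taint) { // hypothetical helper; would evaluate Lt/Le/Ge/Gt
				tolerated = true
				break
			}
		}
		if !tolerated {
			intolerable++
		}
	}
	return intolerable
}
```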
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: helayoty. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
/cc @dom4ha @sanposhiho
- Enhance CEL semver library with additional comparison functions for consistent version handling.
- Keep behavior consistent with existing effects (`NoSchedule`, `PreferNoSchedule`, `NoExecute`).
- Provide unified semantic version handling across scheduling and admission control.
- Backward compatible and opt‑in via feature gates.
Is it fair to add a goal to ensure that the addition of new operators has zero operational effect on existing pod scheduling outcomes using the `Equal` or `Exists` operators? I'm thinking of the various `switch toleration.Operator` bits in source, and assuming that adding more operators should have zero cost to existing scenarios.
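To make the zero-cost expectation concrete, here is a hypothetical sketch of such a switch after the change. The `Equal`/`Exists` arms are byte-for-byte today's semantics, so existing workloads take the same code path; the `TolerationOpLt`-style constants and `compareThreshold` helper are assumptions for illustration, not merged API.

```go
import v1 "k8s.io/api/core/v1"

// Hypothetical new operator constants (not in the current core/v1 API).
const (
	TolerationOpLt v1.TolerationOperator = "Lt"
	TolerationOpLe v1.TolerationOperator = "Le"
	TolerationOpGe v1.TolerationOperator = "Ge"
	TolerationOpGt v1.TolerationOperator = "Gt"
)

// tolerates sketches the extended operator dispatch. The first two cases
// mirror today's semantics exactly; the new operators are purely additive.
func tolerates(toleration v1.Toleration, taint v1.Taint) bool {
	switch toleration.Operator {
	case v1.TolerationOpExists:
		return true
	case "", v1.TolerationOpEqual:
		return toleration.Value == taint.Value
	case TolerationOpLt, TolerationOpLe, TolerationOpGe, TolerationOpGt:
		return compareThreshold(toleration, taint) // hypothetical new helper
	default:
		return false
	}
}
```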
In addition to general scheduling improvements, SLA‑aware opt‑in via tolerations has specific advantages for Dynamic Resource Allocation (DRA) and AI/ML:

- DRA steers GPU/accelerator resource claims by node reliability: critical workloads get high‑SLA capacity while batch workloads use cheaper pools. Taints block risky pools and evict when capacity degrades.
nit: maybe we add "interruptible batch workloads" or checkpoint-able or other similar language to recognize that there is such a thing as a "critical batch workload" that would not be able to benefit from running on interruptible hardware.
As a cluster operator, I want a default repel from spot (low-SLA) nodes so that only workloads that explicitly tolerate them can land there.

I also want to set numeric SLA thresholds in tolerations (e.g., `Gt 950`) so pods can opt-in to reliable nodes or specific SLA bands without having to hardcode every SLA class in NodeAffinity rules.
Something to consider: for every scenario that is (strictly speaking) possible using existing semantics (e.g., NodeAffinity) it might be helpful to include that scenario here. A before/after type picture. Not wanting to add too much information to the KEP, but that would help folks to assess the ergonomic improvements.
```yaml
spec:
  tolerations:
  - key: node.kubernetes.io/sla
    operator: Gt
```
Should this be `Ge`?
```yaml
- key: node.kubernetes.io/sla
  operator: Gt
  value: "750"
  effect: NoSchedule
```
Maybe we can also include an example pod spec that will explicitly not get scheduled onto a spot node, e.g.,

```yaml
---
# Critical workload will not be scheduled until a suitable high-reliability node has capacity
apiVersion: v1
kind: Pod
metadata:
  name: critical-workload
spec:
  tolerations:
  - key: node.kubernetes.io/sla
    operator: Gt
    value: "950"
    effect: NoSchedule
```
This ensures DRA allocations are both resource-correct and reliability-compliant.

**Example Configuration:**
Let's add the ResourceSlice example that includes the taint that ResourceClaim "gpu-claim-high-sla" will tolerate.
```yaml
    value: "24"
    effect: NoSchedule
---
# Batch training workload tolerates degraded devices
```
Nit: Let's call this a "Short-lived batch training workload" to distinguish from other (very normal) batch workloads that would not want to be interrupted in an hour.
#### Story 6 — Kubernetes version compatibility for critical workloads

As a cluster operator managing a mixed-version Kubernetes cluster during rolling upgrades, I want to ensure critical workloads only run on nodes with Kubernetes version >= 1.20.0 due to specific API features they require, while allowing development workloads to tolerate older versions.
Nit: to make this more current, let's add 1.33/1.34 as our two example versions here
```yaml
spec:
  tolerations:
  - key: node.kubernetes.io/version
    operator: SemverGt
```
`SemverGe` for this and below in this example?
- Values must conform to [Semantic Versioning 2.0.0](https://semver.org/) specification requiring exactly 3 components (major.minor.patch)
- Supports both prefixed (`"v1.20.1"`) and non-prefixed (`"1.20.1"`) formats following Kubernetes conventions
- Invalid semver strings (e.g., `"1.20"`, `"1.20.1.1"`, `"1"`) cause validation errors
- Pre-release versions (e.g., `1.20.0-alpha.1`) have lower precedence than release versions (`1.20.0`)
Nit: the pre-release and build metadata points are part of semver, so we could drop them. Or if being super explicit is helpful, let's add the "per semver specification" suffix to the Pre-release versions line item as well.
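For reference alongside the rules quoted above, a small sketch of how validation and comparison could be implemented, assuming `golang.org/x/mod/semver` as the library (the KEP does not mandate it). One wrinkle worth noting: `semver.IsValid` accepts shortened forms such as `v1.20`, so the strict three-component rule needs its own check.

```go
import (
	"strings"

	"golang.org/x/mod/semver"
)

// normalizeVersion accepts both "v1.20.1" and "1.20.1", returning the
// "v"-prefixed form that x/mod/semver requires, or ok=false when the
// value is not a strict major.minor.patch version.
func normalizeVersion(s string) (v string, ok bool) {
	v = s
	if !strings.HasPrefix(v, "v") {
		v = "v" + v
	}
	if !semver.IsValid(v) {
		return "", false // catches "1.20.1.1" and other malformed strings
	}
	// x/mod/semver also accepts shortened forms like "v1" and "v1.20",
	// so enforce the KEP's exactly-three-components rule separately.
	core := strings.TrimPrefix(v, "v")
	if i := strings.IndexAny(core, "-+"); i >= 0 {
		core = core[:i] // ignore pre-release / build metadata here
	}
	if strings.Count(core, ".") != 2 {
		return "", false // e.g. "1.20" or "1"
	}
	return v, true
}

// compareVersions reports -1, 0, or +1. Per the semver spec (and
// x/mod/semver), pre-releases sort before the corresponding release and
// build metadata is ignored in comparisons.
func compareVersions(a, b string) int { return semver.Compare(a, b) }
```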
- Pre-release versions (e.g., `1.20.0-alpha.1`) have lower precedence than release versions (`1.20.0`)
- Build metadata (`+build.123`) is ignored in comparisons per semver specification

- **Type Mismatch Handling**: If toleration and taint values cannot be parsed as the same type (integer vs semver), the toleration does not match. This prevents unexpected behavior and ensures type safety.
This implies I could have two nodes on my cluster like this:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1
spec:
  taints:
  - key: node.kubernetes.io/foo
    value: "100"
    effect: NoExecute
---
apiVersion: v1
kind: Node
metadata:
  name: node-2
spec:
  taints:
  - key: node.kubernetes.io/foo
    value: "v1.34.0"
    effect: NoExecute
```

And that the scheduler would simply schedule workloads with proper tolerations using either of the above semantics (int comparison or semver comparison) to the right node.
Also, the apiserver has no preference between either of the below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-service
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/sla
        operator: Gt
        value: "950"
        effect: NoExecute
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-service
spec:
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/sla
        operator: Gt
        value: "v1.33.0"
        effect: NoExecute
```
During scheduling, if we encounter a matching node taint whose value is of a different type than the workload toleration (int vs semver) then we simply continue and rule out that node as a candidate for scheduling.
Just thinking out loud here, did I get that right?
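For reference, a minimal sketch of the dispatch this thread describes, following the parse-integer-first-then-semver order from the KEP text. `normalizeVersion` is the hypothetical helper from the earlier sketch; a type mismatch simply yields "not comparable", which rules the node out rather than raising an error.

```go
import (
	"strconv"

	"golang.org/x/mod/semver"
)

// compareValues returns the ordering of the toleration value relative to
// the taint value, and whether the two were comparable at all. Exactly
// one integer (int vs semver) is a type mismatch: ok=false, no match.
func compareValues(tolVal, taintVal string) (cmp int, ok bool) {
	a, errA := strconv.ParseInt(tolVal, 10, 64)
	b, errB := strconv.ParseInt(taintVal, 10, 64)
	switch {
	case errA == nil && errB == nil: // both integers
		switch {
		case a < b:
			return -1, true
		case a > b:
			return 1, true
		default:
			return 0, true
		}
	case errA == nil || errB == nil: // exactly one integer: type mismatch
		return 0, false
	}
	va, okA := normalizeVersion(tolVal) // hypothetical helper, sketched earlier
	vb, okB := normalizeVersion(taintVal)
	if !okA || !okB {
		return 0, false // neither a shared int type nor a shared semver type
	}
	return semver.Compare(va, vb), true
}
```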
- **Parsing Overhead**: Each taint/toleration match with comparison operators requires parsing attempts (integer first, then semver), adding computational cost.

> Note: A taint like `foo=95.5:NoSchedule` or `foo=v1.20.1:NoSchedule` is valid since taint values follow label-value syntax. The type detection and parsing occur during toleration matching, not validation.
Is this observation worth including as a Risk? E.g.:

> **Risk**: Invalid taints meant to be used with the new comparison operators (e.g., `node.kubernetes.io/sla=95.5` and `node.kubernetes.io/version=1`) are not detected at admission time.
#### Story 6 — Kubernetes version compatibility for critical workloads

As a cluster operator managing a mixed-version Kubernetes cluster during rolling upgrades, I want to ensure critical workloads only run on nodes with Kubernetes version >= 1.20.0 due to specific API features they require, while allowing development workloads to tolerate older versions.
> I want to ensure critical workloads only run on nodes with Kubernetes version >= 1.20.0 due to specific API features they require, while allowing development workloads to tolerate older versions.

Why add `SemverGt` to the toleration rather than node affinity? I liked your argument in the "Why not NodeAffinity alone?" section, but I'm not sure that reasoning would justify this semver addition.
I just saw you added this semver thing at the last commit along with CEL, which grew the scope of this KEP. I prefer to have another KEP to discuss them.