diff --git a/keps/prod-readiness/sig-node/5309.yaml b/keps/prod-readiness/sig-node/5309.yaml new file mode 100644 index 00000000000..646aec6745a --- /dev/null +++ b/keps/prod-readiness/sig-node/5309.yaml @@ -0,0 +1,3 @@ +kep-number: 5309 +alpha: + approver: "@jpbetz" diff --git a/keps/sig-node/5309-self-orchestrating-pod/README.md b/keps/sig-node/5309-self-orchestrating-pod/README.md new file mode 100644 index 00000000000..d0dbad195db --- /dev/null +++ b/keps/sig-node/5309-self-orchestrating-pod/README.md @@ -0,0 +1,877 @@ +# KEP-5309: Pod self-orchestration + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Key Features & Benefits](#key-features--benefits) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [Error handling](#error-handling) + - [User Stories (Optional)](#user-stories-optional) + - [Restart the job](#restart-the-job) + - [Other ideas](#other-ideas) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- 
[Alternatives](#alternatives)
+- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
+
+
+## Release Signoff Checklist
+
+Items marked with (R) are required *prior to targeting to a milestone / release*.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+There are many scenarios in which a Pod consists of a main task and a sidecar
+container responsible for orchestrating that task.
The sidecar in this scenario
+receives a signal indicating whether the task is still needed. Allowing the
+orchestration sidecar to terminate or restart the main task enables a fast
+reaction to this signal.
+
+This KEP introduces a communication channel between the kubelet and a container
+in a Pod, which may be extended to many other scenarios in the future.
+
+## Motivation
+
+Kubernetes workloads today rely on the API Server for pod creation, deletion,
+and lifecycle management. However, certain advanced use cases require
+self-managed, dynamic pod orchestration within a node while minimizing direct
+API Server interactions. This KEP proposes Self-Orchestrating Pods (SOPs), a
+mechanism that allows a pod to create, manage, and terminate its
+containers without requiring constant communication with the Kubernetes control
+plane or escalating its privileges to access the CRI API.
+
+This capability is particularly useful in resource-intensive environments where
+high-frequency pod management operations could create excessive API Server
+load and delays that leave expensive resources idle.
+For instance, in GPU workloads, features like NVIDIA IMEX (Inter-GPU
+Memory Exchange) require auxiliary daemon processes running alongside workloads.
+Instead of relying on external DaemonSets or the API Server for deployment, a
+Self-Orchestrating Pod would dynamically spawn and manage its own supporting
+containers based on defined constraints.
+
+By extending this concept beyond GPU- and DRA-specific implementations,
+Self-Orchestrating Pods could benefit other high-performance computing (HPC),
+real-time systems, edge computing, and AI/ML workloads that require fine-grained
+control over their execution environment.
+
+### Key Features & Benefits
+
+- Reduced API Server Load: Workloads manage their own supporting containers
+  without frequent API Server interactions.
+- Fine-Grained Workload Control: Pods can create and terminate sub-containers
+  dynamically within the allowed resource constraints.
+- RBAC-Constrained Execution: Sub-containers inherit equal or lower permissions
+  than the parent pod, ensuring security compliance.
+- Broader Use Cases: Supports GPU/DRA drivers, real-time processing
+  applications, security workloads, and more.
+
+This KEP aims to introduce a new architectural paradigm where pods become
+self-contained orchestration units, capable of managing their own execution
+environment efficiently while adhering to Kubernetes’ security and resource
+constraints.
+
+### Goals
+
+- Declare an API to instruct the kubelet to establish a two-way communication
+  channel with the specified container in the Pod.
+- Declare the communication protocol between the kubelet and a container
+  that is versioned and extensible.
+- Declare enough primitives to satisfy two scenarios:
+  - The sidecar can terminate the main container in the Pod, effectively
+    stopping the Job execution.
+  - The sidecar can restart the main container and receive a signal that it
+    was restarted. This allows in-place restart of a single Pod of a large
+    training job so the job can resume from the last checkpoint.
+
+### Non-Goals
+
+- Allow adding new containers to the Pod or other extended scenarios for the
+  communication channel between the kubelet and a container.
+
+## Proposal
+
+The overall idea is to expose a gRPC endpoint from the container
+and declare it in the container spec. The kubelet will connect to this endpoint
+using a well-known, versioned API. The API consists of notifications sent
+from the kubelet to the container as well as a streaming method that allows
+the container to send signals back to the kubelet.
+
+The following Pod declares the sidecar with the port opened and a `podManagement`
+declaration telling the kubelet to connect to the specified `port` with the
+protocol `version` set to `1.0`.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: self-orchestrating-pod
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main-task
+    image: myorg/main-task:latest
+  initContainers:
+  - name: orchestrator-sidecar
+    image: myorg/orchestrator-sidecar:latest
+    restartPolicy: Always
+    # Sidecar responsible for orchestration
+    ports:
+    - containerPort: 50051
+    podManagement:
+      port: 50051
+      version: "1.0"
+```
+
+The gRPC protocol may be declared as the following:
+
+```proto
+syntax = "proto3";
+
+package podmanagement.v1;
+
+import "google/protobuf/timestamp.proto";
+
+// Notification message for container events.
+message ContainerEvent {
+  string container_name = 1;
+  string event_type = 2;  // e.g., "STARTED", "EXITED"
+  int32 exit_code = 3;    // Only set if event_type is "EXITED"
+  string message = 4;     // Optional details
+  google.protobuf.Timestamp time = 5;  // Time when the event happened.
+}
+
+// Request for sending a container event notification.
+message NotifyContainerEventRequest {
+  ContainerEvent event = 1;
+}
+
+// Response to a notification; empty, kept for future extensibility.
+message NotifyContainerEventResponse {
+}
+
+// Command to terminate a container.
+message TerminateContainerCommand {
+  string container_name = 1;
+  int32 exit_code = 2;
+}
+
+// Response to a terminate-container command.
+message TerminateContainerCommandResponse {
+  // Most likely error is that the container was already terminated.
+  string error_description = 1;
+}
+
+// Command wrapper (for extensibility).
+message Command {
+  oneof command {
+    TerminateContainerCommand terminate_container = 1;
+  }
+}
+
+// Response for the command stream.
+message CommandResponse {
+  oneof command_response {
+    TerminateContainerCommandResponse terminate_container_response = 1;
+  }
+}
+
+service PodManagement {
+  // kubelet notifies the orchestration sidecar about container events.
+  rpc NotifyContainerEvent(NotifyContainerEventRequest) returns (NotifyContainerEventResponse);
+
+  // Stream commands from the orchestrator sidecar to the kubelet.
+  rpc CommandStream(stream CommandResponse) returns (stream Command);
+}
+```
+
+Handling of pod management will take two threads per container in the kubelet:
+one for the notification queue and one for listening on the stream.
+
+There will be limits on the notification queue length and an exponential backoff timeout on the command stream.
+
+The current version also does not provide a way to reconcile container status
+in case some notifications were lost, the orchestration sidecar was restarted,
+or the kubelet was restarted.
+
+### Error handling
+
+Since the pod management channel becomes a critical part of a SOP, the
+kubelet's inability to establish a connection will be treated
+as a failed liveness signal for the orchestrating sidecar.
+
+The pod management connection will first be attempted
+when the container reports `Ready`. This way, a `startupProbe`
+can be used to ensure that the container is initialized.
+
+If the pod management connection cannot be established while the container is
+in the Ready state, the container will be terminated the same way a failed
+liveness check would terminate it.
+
+### User Stories (Optional)
+
+#### Restart the job
+
+1. The sidecar container orchestrates the job. The job is a heavy process
+   requiring special GPU hardware connected with other Pods.
+2. The sidecar receives a signal that the job should be abruptly terminated
+   and started from the beginning.
+3. Instead of terminating the whole Pod, the sidecar issues a command to the
+   kubelet to restart a specific container.
+4. The kubelet will report back when the container is restarted.
+5. The sidecar may need to keep other sidecar containers running or have them
+   also be restarted, depending on the function of each sidecar container.
+   Ordering of restart requests to the kubelet is a sidecar decision.
+
+#### Other ideas
+
+1. 
As a model-training researcher running multi-phase workloads, I want the
+   training pod to manage its own container lifecycle locally, so that only the
+   trainer container restarts without triggering a full pod rescheduling.
+2. As a platform engineer, I want to manage the init stage of containers within
+   the pod, so that I can control behaviors like image pull timeouts and retry
+   limits without relying on hardcoded kubelet defaults or global
+   configurations.
+3. As a platform engineer, I want to send signals to individual containers
+   inside a pod without deleting the pod, so that I can gracefully terminate or
+   restart containers while preserving logs, metadata, and the original pod
+   spec.
+
+
+### Notes/Constraints/Caveats (Optional)
+
+
+
+### Risks and Mitigations
+
+The container port of the pod management container will be exposed on the node
+network so the kubelet can connect to it. This creates a risk that malicious
+code could send spoofed notifications to that port.
+
+The orchestrating sidecar will need to be written to expect notifications from
+multiple sources, including potentially malicious senders.
+No protection against this attack is planned.
+
+## Design Details
+
+
+
+### Test Plan
+
+
+
+[ ] I/we understand the owners of the involved components may require updates to
+existing tests to make this code solid enough prior to committing the changes necessary
+to implement this enhancement.
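To make the Proposal's queueing and backoff behavior concrete for unit testing, the following is a minimal, self-contained Python simulation. All names (`NotificationQueue`, `backoff_delays`) and constants here are hypothetical illustrations for test scaffolding, not kubelet code or part of this KEP's API:

```python
import queue


class NotificationQueue:
    """Bounded per-container event queue; models the KEP's limit on
    notification queue length (hypothetical sketch, not kubelet code)."""

    def __init__(self, max_len: int = 16):
        self._q = queue.Queue(maxsize=max_len)
        self.dropped = 0

    def publish(self, event: str) -> bool:
        """Enqueue an event; drop it when the queue is full."""
        try:
            self._q.put_nowait(event)
            return True
        except queue.Full:
            # v1 has no reconciliation: events over the limit are lost.
            self.dropped += 1
            return False

    def drain(self) -> list:
        """Return and remove all queued events in FIFO order."""
        events = []
        while True:
            try:
                events.append(self._q.get_nowait())
            except queue.Empty:
                return events


def backoff_delays(base: float = 1.0, cap: float = 300.0, attempts: int = 6) -> list:
    """Exponential backoff delays (seconds) for re-dialing the command stream,
    doubling each attempt and clamped at `cap`."""
    delays, delay = [], base
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays


if __name__ == "__main__":
    nq = NotificationQueue(max_len=2)
    assert nq.publish("STARTED") and nq.publish("EXITED")
    assert not nq.publish("STARTED")  # queue full: event dropped
    assert nq.dropped == 1
    print(nq.drain())                  # ['STARTED', 'EXITED']
    print(backoff_delays(attempts=4))  # [1.0, 2.0, 4.0, 8.0]
```

Dropping on overflow (rather than blocking) mirrors the statement above that the current version does not reconcile lost notifications; tests built on such a harness could assert how an orchestrating sidecar tolerates dropped events and delayed reconnects.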
+ +#### Prerequisite testing updates + + + +#### Unit tests + + + + + +- ``: `` - `` + +#### Integration tests + + + + + +- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/integration/...): [integration master](https://testgrid.k8s.io/sig-release-master-blocking#integration-master?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature) + +#### e2e tests + + + +- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/e2e/...): [SIG ...](https://testgrid.k8s.io/sig-...?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature) + +### Graduation Criteria + + + +### Upgrade / Downgrade Strategy + + + +### Version Skew Strategy + + + +## Production Readiness Review Questionnaire + + + +### Feature Enablement and Rollback + + + +###### How can this feature be enabled / disabled in a live cluster? + + + +- [ ] Feature gate (also fill in values in `kep.yaml`) + - Feature gate name: + - Components depending on the feature gate: +- [ ] Other + - Describe the mechanism: + - Will enabling / disabling the feature require downtime of the control + plane? + - Will enabling / disabling the feature require downtime or reprovisioning + of a node? + +###### Does enabling the feature change any default behavior? + + + +###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? + + + +###### What happens if we reenable the feature if it was previously rolled back? + +###### Are there any tests for feature enablement/disablement? + + + +### Rollout, Upgrade and Rollback Planning + + + +###### How can a rollout or rollback fail? Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? 
Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? 
+
+
+### Troubleshooting
+
+
+
+###### How does this feature react if the API server and/or etcd is unavailable?
+
+###### What are other known failure modes?
+
+
+
+###### What steps should be taken if SLOs are not being met to determine the problem?
+
+## Implementation History
+
+
+
+## Drawbacks
+
+
+
+## Alternatives
+
+
+
+## Infrastructure Needed (Optional)
+
+
diff --git a/keps/sig-node/5309-self-orchestrating-pod/kep.yaml b/keps/sig-node/5309-self-orchestrating-pod/kep.yaml
new file mode 100644
index 00000000000..70ef708eec2
--- /dev/null
+++ b/keps/sig-node/5309-self-orchestrating-pod/kep.yaml
@@ -0,0 +1,43 @@
+title: Pod self-orchestration
+kep-number: 5309
+authors:
+  - "@SergeyKanzhelev"
+  - "@ArangoGutierrez"
+owning-sig: sig-node
+participating-sigs:
+  - sig-apps
+status: provisional # provisional|implementable|implemented|deferred|rejected|withdrawn|replaced
+creation-date: 2025-05-16
+reviewers:
+  - TBD
+approvers:
+  - TBD
+
+# The target maturity stage in the current dev cycle for this KEP.
+# If the purpose of this KEP is to deprecate a user-visible feature
+# and Deprecated feature gates are added, they should be deprecated|disabled|removed.
+stage: alpha #alpha|beta|stable
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.34"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+  alpha: "v1.34"
+  #beta: "v1.20"
+  #stable: "v1.22"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+feature-gates:
+  - name: PodSelfOrchestration
+    components:
+      - kubelet
+      - kube-apiserver
+disable-supported: true
+
+# The following PRR answers are required at beta release
+metrics:
+  - TBD