Add graduation criteria for CapacityBuffers. #8886
@@ -2,6 +2,30 @@

#### Author: Justyna Betkier (jbtk)

# Timeline

## Alpha (launched in 1.34)

- [x] Implement the API definition
- [x] Implement the buffer controller and fake pod processing logic in the cluster autoscaler
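The Alpha items above cover two pieces: an API object that declares how much spare capacity to keep, and autoscaler logic that expands each buffer into placeholder ("fake") pods inside the scale-up simulation so real nodes get provisioned for them. The Go sketch below only illustrates that shape; it is not the API merged in kubernetes/autoscaler, and the type, field, and function names (`CapacityBuffer`, `PodTemplateRef`, `Replicas`, `makeFakePods`) are assumptions made for illustration.

```go
// Illustrative sketch only: the type and field names are assumptions,
// not the actual CapacityBuffer API shipped in kubernetes/autoscaler.
package main

import "fmt"

// CapacityBuffer describes spare capacity the autoscaler should keep available,
// expressed as N replicas of a referenced pod shape.
type CapacityBuffer struct {
	Name           string
	PodTemplateRef string // name of a pod template to copy the pod shape from
	Replicas       int    // how many placeholder pods to inject
}

// FakePod stands in for the placeholder pods a buffer controller would feed
// into the scheduling simulation; real code would build full pod objects.
type FakePod struct {
	Name     string
	Template string
}

// makeFakePods expands a buffer into placeholder pods. In a real controller
// these pods would exist only inside the autoscaler's simulation, so they
// trigger scale-up without ever being scheduled or run.
func makeFakePods(b CapacityBuffer) []FakePod {
	pods := make([]FakePod, 0, b.Replicas)
	for i := 0; i < b.Replicas; i++ {
		pods = append(pods, FakePod{
			Name:     fmt.Sprintf("%s-buffer-%d", b.Name, i),
			Template: b.PodTemplateRef,
		})
	}
	return pods
}

func main() {
	buf := CapacityBuffer{Name: "web", PodTemplateRef: "web-pod-template", Replicas: 3}
	for _, p := range makeFakePods(buf) {
		fmt.Println(p.Name, "from", p.Template)
	}
}
```

The idea is that buffer capacity is expressed in pod-sized units, so the existing scale-up simulation can treat the placeholders like any other pending pods; the real controller's details may differ.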
## Beta graduation criteria (planned for 1.35)

- [ ] Implement integration with k8s resource quotas

## V1 graduation criteria (planned for TBD)

- [ ] E2e test implemented and healthy
- [ ] In beta for at least 1 full version
- [ ] Waiting up to 2 versions in beta for a second OSS implementation
Contributor: So you'd target 1.37. While I fully expect us to come to a decision before then, I'd hope we could have a conversation rather than falling back to lazy consensus.

Author: This is what I was trying to write with "we will reevaluate the graduation criteria with sig-autoscaling leads based on ... immediate future plans". So this is not a lazy consensus but a conversation with the sig-autoscaling leads. Please let me know what is not clear.
(karpenter). In case of no implementation, in order to avoid permanent beta
(following the spirit of [guidance for k8s REST APIs](https://kubernetes.io/blog/2020/08/21/moving-forward-from-beta/#avoiding-permanent-beta)),
we will reevaluate the graduation criteria with sig-autoscaling leads based on:
- existing adoption and feedback
- reasons for no implementation and immediate future plans
- [ ] Graduation plan announced 1 month in advance at a sig-autoscaling meeting to allow time for feedback
- [ ] Reviewed and summarized all open issues about buffers in the https://github.com/kubernetes/autoscaler/ repository

# Summary

If the user uses autoscaling the cluster size will be adjusted to the number of
Contributor: Isn't 1.35 more or less closed at this point? I thought it was launching very soon.

Author: The cluster autoscaler does not do freezes like core Kubernetes and releases some time after core Kubernetes. I would say there is still a chance (and we will try to do this), but I am not confident that we will make it.