Description
/kind bug
AWSManagedMachinePool cannot reliably update minSize/maxSize because MachinePool.spec.replicas is used as the desiredSize. This is especially troublesome when using cluster-autoscaler (with the cluster.x-k8s.io/replicas-managed-by annotation on the MachinePool).
Example:
Starting state:
- MachinePool.spec.replicas = 0
- AWSManagedMachinePool.spec.scaling = {minSize: 0, maxSize: 5}

We want the node group to always have at least 2 nodes, so via gitops we update AWSManagedMachinePool.spec.scaling = {minSize: 2, maxSize: 5}.
Reconciliation then fails with a 400 from AWS stating that a desired size of 0 cannot be less than minSize 2 (as sketched below). The same will presumably happen if we try to lower maxSize below MachinePool.spec.replicas.
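For illustration, a minimal sketch (not the actual CAPA reconciler code) of the kind of scaling update that ends up being sent to EKS in this scenario, assuming desiredSize is taken directly from MachinePool.spec.replicas; the cluster/nodegroup names are hypothetical and the field names are from aws-sdk-go:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/eks"
)

// buildScalingUpdate sketches the rejected request: desiredSize is copied from
// MachinePool.spec.replicas (0, because cluster-autoscaler manages replicas),
// while minSize/maxSize come from the updated AWSManagedMachinePool.spec.scaling.
func buildScalingUpdate() *eks.UpdateNodegroupConfigInput {
	desiredSize := int64(0)                // MachinePool.spec.replicas
	minSize, maxSize := int64(2), int64(5) // AWSManagedMachinePool.spec.scaling after the gitops update

	return &eks.UpdateNodegroupConfigInput{
		ClusterName:   aws.String("my-cluster"),   // hypothetical
		NodegroupName: aws.String("my-nodegroup"), // hypothetical
		ScalingConfig: &eks.NodegroupScalingConfig{
			MinSize:     aws.Int64(minSize),
			MaxSize:     aws.Int64(maxSize),
			DesiredSize: aws.Int64(desiredSize), // 0 < minSize (2) -> AWS rejects with a 400
		},
	}
}

func main() {
	fmt.Println(buildScalingUpdate())
}
```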
Workaround:
- remove the cluster.x-k8s.io/replicas-managed-by annotation
- set MachinePool replicas manually and set minSize on AWSManagedMachinePool
What did you expect to happen:
```go
// ReplicasManagedByAnnotation is an annotation that indicates external (non-Cluster API) management of infra scaling.
// The practical effect of this is that the capi "replica" count should be passively derived from the number of observed infra machines,
// instead of being a source of truth for eventual consistency.
// This annotation can be used to inform MachinePool status during in-progress scaling scenarios.
ReplicasManagedByAnnotation = "cluster.x-k8s.io/replicas-managed-by"
```
Setting minSize above the observed replica count should bump desiredSize up to minSize. There may be pushback on the opposite end, where setting maxSize below the observed replica count would drain nodes once desiredSize is lowered to maxSize.
It would be truer to the annotation's definition if desiredSize were clamped to the configured minSize/maxSize, as sketched below.
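A minimal sketch of that clamping behaviour, assuming desiredSize is derived from the observed replica count and then bounded by the configured scaling limits (the helper name is hypothetical, not existing CAPA code):

```go
package scaling

// clampDesiredSize is a hypothetical helper: it derives desiredSize from the
// observed replica count but never lets it fall outside [minSize, maxSize].
func clampDesiredSize(observedReplicas, minSize, maxSize int32) int32 {
	desired := observedReplicas
	if desired < minSize {
		desired = minSize // e.g. replicas=0, minSize=2 -> desiredSize bumped to 2
	}
	if desired > maxSize {
		desired = maxSize // lowering maxSize below replicas drains nodes down to maxSize
	}
	return desired
}
```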
Environment:
- Cluster-api-provider-aws version: latest
- Kubernetes version (use kubectl version): n/a
- OS (e.g. from /etc/os-release): n/a