Apply provider manifests with SSA #259
Conversation
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/test

@mdbooth: The following commands are available to trigger optional jobs. Use /test all to run all of them. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test all
Interesting observation: with this change, kube-apiserver now reports that it's reloading all the provider CRDs every 5 minutes. However, it was also doing that before. I think this might be good, and there's also another issue.
Looks like everything interesting passed. I'm pushing an update which:
Force-pushed f0f9f6a to 354bd1d.
That last commit reduced the number of reconciles from 67 without this PR to 10. However:

Still think this is an improvement.
That last commit is probably too much for this PR. I'll split it into a separate PR if you want it.

Because we're now iterating directly over objects, it's trivial to add an owner reference to every object we create. This allows us to be much more explicit about the objects we watch. It's also a more obvious fit for the behaviour of this controller: the controller doesn't really manage objects which have a capi provider label; it manages objects it created, whatever label they have. See the sketch below.

It also gives us a way to implement the automatic removal of objects which are no longer referenced in a provider ConfigMap. This actually came up in CAPO when we removed our MutatingWebhook: clusterctl implements this correctly, but users who were doing it manually had failed installations after upgrade because there was a floating MutatingWebhook referring to an endpoint which no longer existed.
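Not the PR's actual code, but a minimal controller-runtime sketch of the idea: stamp a controller owner reference on each object before applying it, so the controller tracks what it created rather than anything carrying the provider label. The Reconciler type, helper name, and field manager name are all illustrative.

```go
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// Reconciler is a hypothetical stand-in for the operator's provider reconciler.
type Reconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// applyWithOwner stamps a controller owner reference on obj before applying
// it with SSA. Owned objects can then be enumerated and watched directly,
// and objects dropped from a provider ConfigMap can be found and deleted.
func (r *Reconciler) applyWithOwner(ctx context.Context, owner, obj client.Object) error {
	// Caveat: a namespaced owner cannot own cluster-scoped objects such as
	// CRDs; those would need separate handling.
	if err := controllerutil.SetControllerReference(owner, obj, r.Scheme); err != nil {
		return err
	}
	return r.Patch(ctx, obj, client.Apply, client.FieldOwner("cluster-capi-operator"))
}
```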
Force-pushed 8a4a39f to bb5ac06.
We were unconditionally triggering reconciles for all modifications to any managed object. This was producing a large number of unnecessary reconciles when the status of managed objects was updated.
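A hedged sketch of the kind of filtering this commit describes, reusing the hypothetical Reconciler from the earlier sketch: status-only writes do not bump metadata.generation, so predicate.GenerationChangedPredicate drops the update events they produce. Deployment stands in here for a watched kind; the PR's actual predicate may differ.

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// Reconcile re-applies the provider manifests. Body elided in this sketch.
func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}

// SetupWithManager wires the watch with a predicate that ignores updates
// which only touch status: those writes leave metadata.generation unchanged,
// so GenerationChangedPredicate filters the resulting events out. Note that
// kinds whose generation the API server never increments would see all their
// update events dropped by this predicate.
func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}, builder.WithPredicates(predicate.GenerationChangedPredicate{})).
		Complete(r)
}
```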
Force-pushed bb5ac06 to 41daec5.
@mdbooth: The following tests failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Replaced by #292 as I no longer have write access to this fork. |
This change uses Server-Side Apply to apply provider manifests in place of the logic from library-go, which uses an Update with custom kind-specific client-side merge logic. Using SSA here should be equivalent to how these manifests would be applied by any other client.
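A minimal sketch of what applying a manifest with SSA looks like through a controller-runtime client; the function and the field manager name are illustrative, not necessarily the PR's code.

```go
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyManifest sends the manifest to the API server as a Server-Side Apply
// patch. The server merges it with fields owned by other managers (such as
// service-ca-operator), replacing library-go's kind-specific client-side merge.
func applyManifest(ctx context.Context, c client.Client, obj *unstructured.Unstructured) error {
	return c.Patch(ctx, obj, client.Apply,
		client.FieldOwner("cluster-capi-operator"), // illustrative field manager name
		client.ForceOwnership,                      // reclaim fields we specify even if another manager set them
	)
}
```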
The merge logic from library-go should no longer be required as long as the manifests are not specifying any fields which will be overwritten by an existing controller.

In particular, there should be no conflict with service-ca setting caBundle in various places, as long as the specified manifests do not include a caBundle. However, note that due to a validation bug in older versions of k8s, some manifests do still specify an empty caBundle in their CRDs. These would have to be removed for this to work; a sketch of stripping them follows the flow description below.

The expected flow of reconciles between cluster-capi-operator and service-ca-operator becomes:
The final update will produce no change as long as no managed fields have been updated, so there will be no further reconciles.
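A hypothetical helper for the caBundle caveat above, assuming manifests are held as unstructured objects. The field path targets the v1 CRD conversion-webhook location, and the function name is made up.

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// stripEmptyCABundle removes an empty spec.conversion.webhook.clientConfig.caBundle
// from a CRD manifest. If we applied the empty value with SSA we would own the
// field and fight service-ca-operator, which injects the real bundle.
func stripEmptyCABundle(obj *unstructured.Unstructured) error {
	fields := []string{"spec", "conversion", "webhook", "clientConfig", "caBundle"}
	val, found, err := unstructured.NestedString(obj.Object, fields...)
	if err != nil || !found {
		return err
	}
	if val == "" {
		unstructured.RemoveNestedField(obj.Object, fields...)
	}
	return nil
}
```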
However, objects with the provider label were being reconciled too often due to insufficient filtering. This PR includes a second commit which addresses that.