🐛 Remove invalid kustomizeconfig #3685

Conversation
This change has no effect on the output of this kustomization because the removed configuration was redundant. However, it fixes a bug which can be triggered when using this kustomization as a base for another kustomization.

We remove 2 types of configuration. The first is redundant, the second is incorrect and triggers the bug.

Firstly we remove nameReference configurations which specify references which are already part of kustomize's defaults. These were found in:

* config/govmomi/webhook/kustomizeconfig.yaml
* config/supervisor/webhook/kustomizeconfig.yaml

Secondly we remove all namespace configurations as these were all incorrect. These directed the namespace transformer to update webhooks/clientConfig/service/namespace. However, this is not the intended function of the namespace transformer: it should only set the namespace directly on objects and allow references to be updated automatically by nameReference. Configuring it to update a reference directly leaves kustomize with inconsistent internal state. Depending on execution order this can cause a subsequent transformation to fail to update the reference when it makes further changes to the Service object.
/test help

@chrischdi: The following commands are available to trigger optional jobs. In response to this: /test help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Tested via:

```
kustomize build --load-restrictor LoadRestrictionsNone config/default > $OUTDIR/default.yaml
kustomize build --load-restrictor LoadRestrictionsNone config/supervisor > $OUTDIR/supervisor.yaml
```

And comparing outputs, there was no diff 👍

/test pull-cluster-api-provider-vsphere-e2e-govmomi-main
/lgtm

/hold Pending CI
LGTM label has been added. Git tree hash: 4f6e44836acdd87448e00682425b21ed74b2fbf8
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chrischdi

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.

/hold cancel

Thx!
What this PR does / why we need it:
This change has no effect on the output of this kustomization because
the removed configuration was redundant. However, it fixes a bug which
can be triggered when using this kustomization as a base for another
kustomization.
We remove two types of configuration: the first is redundant; the second
is incorrect and triggers the bug.
Firstly we remove nameReference configurations which specify references
which are already part of kustomize's defaults. These were found in:

* config/govmomi/webhook/kustomizeconfig.yaml
* config/supervisor/webhook/kustomizeconfig.yaml
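For context, the removed entries look like the webhook kustomizeconfig that kubebuilder-style scaffolding generates (an illustrative sketch; the exact contents of the files above may differ). kustomize's built-in nameReference configuration already updates webhooks/clientConfig/service/name when a Service is renamed, so an entry like this adds nothing:

```yaml
# Redundant: kustomize's built-in nameReference configuration already
# updates Service name references inside webhook configurations.
nameReference:
- kind: Service
  version: v1
  fieldSpecs:
  - kind: MutatingWebhookConfiguration
    group: admissionregistration.k8s.io
    path: webhooks/clientConfig/service/name
  - kind: ValidatingWebhookConfiguration
    group: admissionregistration.k8s.io
    path: webhooks/clientConfig/service/name
```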
Secondly we remove all namespace configurations as these were all
incorrect. These directed the namespace transformer to update
webhooks/clientConfig/service/namespace. However, this is not the
intended function of the namespace transformer: it should only set the
namespace directly on objects and allow references to be updated
automatically by nameReference. Configuring it to update a reference
directly leaves kustomize with inconsistent internal state. Depending on
execution order this can cause a subsequent transformation to fail to
update the reference when it makes further changes to the Service
object.
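The kind of namespace entry being removed looks like this (an illustrative sketch based on the kubebuilder-style webhook kustomizeconfig; the exact contents may differ):

```yaml
# Incorrect: this directs the namespace transformer to rewrite a
# *reference* to the Service rather than the namespace of an object
# itself. The reference is already kept up to date by kustomize's
# built-in nameReference handling, so rewriting it here leaves
# kustomize with inconsistent internal state.
namespace:
- kind: MutatingWebhookConfiguration
  group: admissionregistration.k8s.io
  path: webhooks/clientConfig/service/namespace
  create: true
```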
This issue is common amongst multiple providers, likely because it was present in an early version of the kubebuilder templates.
This PR is based on kubernetes-sigs/cluster-api-provider-azure#5982
To confirm that the removed configuration was redundant:
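The check is the build-and-diff quoted earlier in this PR; a minimal sketch (assuming kustomize is on PATH, and that $PREV_OUTDIR is a hypothetical directory holding builds from the revision before this change):

```
# Build each kustomization with the change applied.
# LoadRestrictionsNone is required because these bases reference
# files outside their own directories.
kustomize build --load-restrictor LoadRestrictionsNone config/default > "$OUTDIR/default.yaml"
kustomize build --load-restrictor LoadRestrictionsNone config/supervisor > "$OUTDIR/supervisor.yaml"

# Compare against the pre-change builds; an empty diff confirms the
# removed configuration was redundant.
diff "$PREV_OUTDIR/default.yaml" "$OUTDIR/default.yaml"
diff "$PREV_OUTDIR/supervisor.yaml" "$OUTDIR/supervisor.yaml"
```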