Replies: 1 comment
This discussion is downstream of a question I asked in the #ask-community channel of the Gruntwork Slack. Here is the current design of the module I am implementing at $JOB. Thanks for opening it @ryehowell.

k8s < 1.24: "Self-managed" CoreDNS

We are not using the EKS add-on because pod tolerations are not supported in the EKS add-on schema until v1.24, and pod tolerations are the primary reason we want to manipulate the EKS-provided CoreDNS Deployment. This is implemented as a module to be called from the main EKS module, as follows:

```hcl
variable "pod_tolerations" {
  description = "Configure tolerations rules to allow the Pod to schedule on nodes that have been tainted. Each item in the list specifies a toleration rule."
  type = list(object({
    key      = string
    operator = string
    value    = string
    effect   = string
  }))
  default = [
    {
      "key"      = "dedicated"
      "operator" = "Equal"
      "value"    = "core"
      "effect"   = "NoSchedule"
    },
  ]
}
locals {
  pod_tolerations_patches = [
    for toleration in var.pod_tolerations : {
      "op"    = "add"
      "path"  = "/spec/template/spec/tolerations/-"
      "value" = toleration
    }
  ]

  # Originally we had multiple patch locations. Leaving this here because it's
  # potentially useful in the future.
  deployment_patches = concat(
    local.pod_tolerations_patches,
    [
      # local.some_other_patch,
    ],
  )
}
# Putting the JSON in a local file because passing it directly to CLI args just would NOT escape correctly
resource "local_file" "coredns_workload_patch" {
count = var.patch_coredns_workloads ? 1 : 0
content = jsonencode(local.deployment_patches)
filename = "${path.cwd}/tmp/coredns_workload_patch.json"
}
resource "null_resource" "patch_coredns_workload" {
count = var.patch_coredns_workloads ? 1 : 0
triggers = {
# The endpoint changes when the cluster is redeployed, the ARN does not
# so we will use the ARN for endpoint lookup at exec time
eks_cluster_arn = var.eks_cluster_arn
# Link to the configuration inputs so that this is done each time the configuration changes.
deployment_patches = local_file.coredns_workload_patch[0].content
}
provisioner "local-exec" {
command = join(" ",
[
"kubergrunt", "k8s", "kubectl", "--kubectl-eks-cluster-arn", self.triggers.eks_cluster_arn, "--",
"--namespace=kube-system",
"patch", "deployment", "coredns",
"--type=json", "--patch-file", local_file.coredns_workload_patch[0].filename,
# "--dry-run",
]
)
}
# We are assuming that nothing other than this module will have ever updated the Deployment
provisioner "local-exec" {
when = destroy
command = join(" ",
[
"kubergrunt", "k8s", "kubectl", "--kubectl-eks-cluster-arn", self.triggers.eks_cluster_arn, "--",
"--namespace=kube-system",
"rollout", "undo", "deployment", "coredns",
# "--dry-run",
]
)
}
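  # Illustration only, and an assumption on my part about kubergrunt's behavior:
  # `kubergrunt k8s kubectl --kubectl-eks-cluster-arn <arn> -- <args>` authenticates
  # against the cluster and forwards <args> to kubectl, so the two provisioners
  # should be roughly equivalent to running:
  #
  #   kubectl --namespace=kube-system patch deployment coredns \
  #     --type=json --patch-file tmp/coredns_workload_patch.json
  #   kubectl --namespace=kube-system rollout undo deployment coredns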
}
```

k8s >= 1.24: EKS Addon-Managed CoreDNS

Implemented in the main EKS management module, as follows:

```hcl
variable "eks_addons" {
  type        = any
  description = "Configuration of EKS add-ons (e.g. coredns)"

  default = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
      configuration_values = {
        tolerations = [
          {
            "key"      = "dedicated"
            "operator" = "Equal"
            "value"    = "core"
            "effect"   = "NoSchedule"
          },
        ]
      }
    }
  }
}
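# Note, and an assumption on my part rather than something from the module docs:
# the EKS API takes an add-on's configuration values as a JSON (or YAML) string
# matching the add-on's configuration schema, so whatever consumes this object
# form presumably has to encode it, e.g. with jsonencode(...), which for coredns
# would yield:
#
#   {"tolerations":[{"key":"dedicated","operator":"Equal","value":"core","effect":"NoSchedule"}]}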
module "eks_cluster_control_plan" {
source = "git::[email protected]:gruntwork-io/terraform-aws-eks.git//modules/eks-cluster-control-plane?ref=v0.61.0.orlater"
enable_eks_addons = true
eks_addons = var.eks_addons
}
```

An open question I have on the path of managing this via the EKS add-on is: what is the behavior when updating/converting from the self-managed CoreDNS Deployment to the managed add-on, i.e. which existing configuration is preserved and which is overwritten? Primarily I'm looking for answers to this question. Secondarily, I wouldn't mind feedback on my design approach.
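One way to frame the question concretely is to read back what configuration the managed add-on is actually running with after a change. A minimal sketch, assuming the AWS CLI and a hypothetical cluster name `my-cluster`:

```sh
# Show the configuration values currently applied to the coredns add-on
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name coredns \
  --query 'addon.configurationValues' \
  --output text
```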
The process of upgrading/converting from a "self-managed" EKS add-on to a "managed" EKS add-on often isn't clear about which configurations will be saved/re-used and which may (or will) be overwritten by the update.
Details soon to come; this is a work in progress.
Tracked in ticket #110404