Question about setting Pod and Service Subnets, and also Node IP and DNS address #11882
I am trying to set up the Kubernetes kube-dns service.

Log from hubble-relay (note how Hubble Relay is trying to reach the DNS using the default Talos service subnet DNS address):

Log from cattle-cluster-agent:

I have tried setting the … Running the command … Coming from more of an "everything is handed to you" solution like RKE2: is this way of working expected and required, or am I missing something else with the setup?

Useful info:
CNI: Cilium with kube-proxy replacement (config at the bottom)

Talos OpenTofu config:
variable "talos_version" { default = "1.11.1" }
variable "kubernetes_version" { default = "1.34.0" }
variable "kubernetes_vip_address" { default = "10.182.76.10" }
variable "kubernetes_pod_cidr_list" { default = ["10.42.0.0/16"] }
variable "kubernetes_service_cidr_list" { default = ["10.43.0.0/16"] }
variable "kubernetes_dns_address_list" { default = [ "10.43.0.10" ] }
variable "nameserver_addresses" {
  default = [
    "10.185.10.10",
    "10.185.11.11",
  ]
}
resource "talos_machine_secrets" "global" {}
data "talos_machine_configuration" "controlplane" {
  talos_version      = var.talos_version
  kubernetes_version = var.kubernetes_version
  cluster_name       = var.cluster_name
  machine_type       = "controlplane"
  cluster_endpoint   = "https://${var.kubernetes_vip_address}:6443"
  machine_secrets    = talos_machine_secrets.global.machine_secrets
  config_patches = [
    yamlencode({
      machine = {
        time = {
          servers = var.nameserver_addresses
        }
        network = {
          interfaces = [{
            interface = "eth0"
            vip = {
              ip = var.kubernetes_vip_address
            }
          }]
        }
        kubelet = {
          clusterDNS = var.kubernetes_dns_address_list
        }
      }
    })
  ]
}
resource "talos_machine_configuration_apply" "controlplane" {
  count                       = var.cluster_controller_count
  node                        = var.controller_address_list[count.index]
  client_configuration        = talos_machine_secrets.global.client_configuration
  machine_configuration_input = data.talos_machine_configuration.controlplane.machine_configuration
  config_patches = [
    yamlencode({
      cluster = {
        network = {
          cni = {
            name = "none"
          }
          podSubnets     = var.kubernetes_pod_cidr_list
          serviceSubnets = var.kubernetes_service_cidr_list
        }
        externalCloudProvider = {
          enabled = true
        }
        proxy = {
          disabled = true
        }
        extraManifests = [
          rancher2_cluster.imported.cluster_registration_token[0].manifest_url
        ]
      }
      machine = {
        kubelet = {
          nodeIP = {
            validSubnets = [
              "${var.controller_address_list[count.index]}/32"
            ]
          }
        }
      }
    })
  ]
}
data "talos_machine_configuration" "worker" {
  talos_version      = var.talos_version
  kubernetes_version = var.kubernetes_version
  cluster_name       = var.cluster_name
  machine_type       = "worker"
  cluster_endpoint   = "https://${var.kubernetes_vip_address}:6443"
  machine_secrets    = talos_machine_secrets.global.machine_secrets
  config_patches = [
    yamlencode({
      machine = {
        time = {
          servers = var.nameserver_addresses
        }
        kubelet = {
          clusterDNS = var.kubernetes_dns_address_list
        }
      }
    })
  ]
}
resource "talos_machine_configuration_apply" "worker" {
  count                       = var.cluster_worker_count
  node                        = var.worker_address_list[count.index]
  client_configuration        = talos_machine_secrets.global.client_configuration
  machine_configuration_input = data.talos_machine_configuration.worker.machine_configuration
  config_patches = [
    yamlencode({
      machine = {
        kubelet = {
          nodeIP = {
            validSubnets = [
              "${var.worker_address_list[count.index]}/32"
            ]
          }
        }
      }
    })
  ]
}
resource "talos_machine_bootstrap" "cluster" {
  node                 = var.controller_address_list[0]
  client_configuration = talos_machine_secrets.global.client_configuration
  depends_on = [
    talos_machine_configuration_apply.controlplane,
    talos_machine_configuration_apply.worker,
  ]
}

Cilium config:
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup
devices: eth0
hubble:
  enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
ipam:
  mode: kubernetes
k8sServiceHost: localhost
k8sServicePort: 7445
kubeProxyReplacement: true
nodePort:
  enabled: true
securityContext:
  capabilities:
    ciliumAgent:
      - CHOWN
      - KILL
      - NET_ADMIN
      - NET_RAW
      - IPC_LOCK
      - SYS_ADMIN
      - SYS_RESOURCE
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID
    cleanCiliumState:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_RESOURCE
There are too many questions mixed together here, but in general: changing pod/service subnets is not easy on a running cluster, so set them at cluster creation time and you're good. If you have multiple IPs per node, see https://www.talos.dev/v1.11/talos-guides/network/multihoming/ Also see https://www.talos.dev/v1.11/introduction/troubleshooting/#conflict-on-kubernetes-and-host-subnets
The DNS address is set correctly if you apply the correct config. It might be that you aren't pushing the service/pod subnets to the workers in your case.
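If that is the cause, one possible fix (a sketch against the config in the question, reusing its variable names; not a verified working setup) is to mirror the cluster.network pod/service subnets into the worker configuration, so workers are generated with the same CIDRs as the control plane. Note that on an already-bootstrapped cluster, changing these subnets after the fact is not straightforward, so this belongs in the config before bootstrap:

data "talos_machine_configuration" "worker" {
  # ... same talos_version, kubernetes_version, cluster_name,
  # machine_type, cluster_endpoint, machine_secrets as in the question ...
  config_patches = [
    yamlencode({
      # Replicate the subnets that the control-plane apply already patches in,
      # so worker kubelets agree on the pod/service CIDRs.
      cluster = {
        network = {
          podSubnets     = var.kubernetes_pod_cidr_list
          serviceSubnets = var.kubernetes_service_cidr_list
        }
      }
      machine = {
        time    = { servers = var.nameserver_addresses }
        kubelet = { clusterDNS = var.kubernetes_dns_address_list }
      }
    })
  ]
}

An alternative with the same effect would be to put the cluster.network patch in both talos_machine_configuration data sources rather than in the per-node apply resources, so every generated config carries it.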