Replies: 1 comment
-
Thanks, we have some sanity checks as well as e2e suites. I tested the following `TenantControlPlane`:

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: capi-quickstart-kubevirt
  namespace: default
spec:
  addons:
    coreDNS: {}
    kubeProxy: {}
  controlPlane:
    deployment:
      registrySettings:
        apiServerImage: kube-apiserver
        controllerManagerImage: kube-controller-manager
        registry: registry.k8s.io
        schedulerImage: kube-scheduler
      replicas: 2
    service:
      serviceType: LoadBalancer
  dataStore: default
  dataStoreSchema: default_capi_quickstart_kubevirt
  dataStoreUsername: default_capi_quickstart_kubevirt
  kubernetes:
    admissionControllers:
      - CertificateApproval
      - CertificateSigning
      - CertificateSubjectRestriction
      - DefaultIngressClass
      - DefaultStorageClass
      - DefaultTolerationSeconds
      - LimitRanger
      - MutatingAdmissionWebhook
      - NamespaceLifecycle
      - PersistentVolumeClaimResize
      - Priority
      - ResourceQuota
      - RuntimeClass
      - ServiceAccount
      - StorageObjectInUseProtection
      - TaintNodesByCondition
      - ValidatingAdmissionWebhook
    kubelet:
      cgroupfs: systemd
      preferredAddressTypes:
        - InternalIP
        - ExternalIP
    version: v1.32.1
  networkProfile:
    clusterDomain: cluster.local
    podCidr: 10.243.0.0/16
    port: 6443
    serviceCidr: 10.95.0.0/16
```

This is the resulting `kube-proxy` ConfigMap:

```yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: ""
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpBeLiberal: false
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
      udpStreamTimeout: 0s
      udpTimeout: 0s
    detectLocal:
      bridgeInterface: ""
      interfaceNamePrefix: ""
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      localhostNodePorts: null
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    logging:
      flushFrequency: 0
      options:
        json:
          infoBufferSize: "0"
        text:
          infoBufferSize: "0"
      verbosity: 0
    metricsBindAddress: ""
    mode: ""
    nftables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    winkernel:
      enableDSR: false
      forwardHealthCheckVip: false
      networkName: ""
      rootHnsEndpointName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://172.18.255.101:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
```

These are the pods:
It seems something is particular to your setup. I'd like to understand more about it so we can assess this as a bug and introduce a related fix.
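If it helps, output from something along these lines would give us the data we need. This is only a sketch: the `TenantControlPlane` name and namespace come from the manifest above, and the tenant kubeconfig path is a placeholder you will need to adjust.

```bash
# On the management cluster: the pod CIDR Kamaji is expected to propagate
# (name and namespace taken from the TenantControlPlane above).
kubectl -n default get tenantcontrolplane capi-quickstart-kubevirt \
  -o jsonpath='{.spec.networkProfile.podCidr}{"\n"}'

# Against the tenant cluster (kubeconfig path is a placeholder):
export KUBECONFIG=./capi-quickstart-kubevirt.kubeconfig
kubectl -n kube-system get configmap kube-proxy -o yaml
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50
```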
-
Hey Maintainers, first of all I would like to congratulate you on a great project. I found myself in need of virtual control planes, and Kamaji saved me so much time. It's so cool to find it in such an advanced state; things work so nicely. Thank you.
Now back to the problem I am facing. After I join the worker node to the Kamaji-managed control plane, the `kube-proxy` ConfigMap in the `kube-system` namespace isn't configured correctly for the `kube-proxy` pods to start.

Here's my control plane definition:
After issuing `kubeadm token create --print-join-command` and joining the worker using the generated command, I see that the config map looks like this:

It seems that `.data["config.conf"].clusterCIDR` remains empty. This prevents kube-proxy from starting:

```bash
kubectl get pods -n kube-system -l 'k8s-app=kube-proxy'
```
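For completeness, this is roughly how I am reading the field; the jsonpath escaping is just one way to pull the embedded `config.conf` out of the ConfigMap, and the commented output reflects what I currently see.

```bash
# Print only the clusterCIDR line from the embedded config.conf.
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.config\.conf}' | grep clusterCIDR
# clusterCIDR: ""
```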
I can fix this manually by doing something similar to the following (a rough sketch follows below):

1. Set `clusterCIDR` to the required value.
2. `kubectl delete pods -n kube-system -l 'k8s-app=kube-proxy'`.

This causes `kube-proxy` to start, but I see almost immediately that my change to `clusterCIDR` gets wiped by what I assume is the control plane mutating the config map back to an empty `clusterCIDR`.
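Concretely, the workaround looks roughly like this. `POD_CIDR` is just a shell variable holding the `podCidr` from my `TenantControlPlane`, and this is a temporary hack rather than anything I would consider a proper fix.

```bash
# podCidr taken from my TenantControlPlane definition.
POD_CIDR="10.243.0.0/16"

# Rewrite clusterCIDR inside data["config.conf"] of the kube-proxy ConfigMap.
kubectl -n kube-system get configmap kube-proxy -o yaml \
  | sed "s|clusterCIDR: \"\"|clusterCIDR: \"${POD_CIDR}\"|" \
  | kubectl replace -f -

# Recreate the kube-proxy pods so the DaemonSet picks up the change.
kubectl delete pods -n kube-system -l 'k8s-app=kube-proxy'
```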
My questions:

1. What writes the `config.conf` contents into the map?
2. How can I get my `clusterCIDR` value persisted?

Thank you!