Description
My MariaDB cluster exceeded max_connections and crashed. I deleted the pods so they would be recreated, but they couldn't start. Checking the init container init-config, I found these lines:
$ kubectl -n mysql logs -f mariadb-0 -c init-config
This is pod 0 (mariadb-0.mariadb.mysql.svc.cluster.local ) for statefulset mariadb.mysql.svc.cluster.local
This is the 1st statefulset pod. Checking if the statefulset is down ...
+ HOST_ID=0
++ dnsdomainname -d
+ STATEFULSET_SERVICE=mariadb.mysql.svc.cluster.local
++ dnsdomainname -A
+ POD_FQDN='mariadb-0.mariadb.mysql.svc.cluster.local '
+ echo 'This is pod 0 (mariadb-0.mariadb.mysql.svc.cluster.local ) for statefulset mariadb.mysql.svc.cluster.local'
+ '[' -z /data/db ']'
+ SUGGEST_EXEC_COMMAND='kubectl --namespace=mysql exec -c init-config mariadb-0 --'
+ [[ mariadb.mysql.svc.cluster.local = mariadb.* ]]
+ '[' 0 -eq 0 ']'
+ echo 'This is the 1st statefulset pod. Checking if the statefulset is down ...'
+ getent hosts mariadb
+ '[' 2 -eq 2 ']'
+ '[' '!' -d /data/db/mysql ']'
+ set +x
----- ACTION REQUIRED -----
No peers found, but data exists. To start in wsrep_new_cluster mode, run:
kubectl --namespace=mysql exec -c init-config mariadb-0 -- touch /tmp/confirm-new-cluster
Or to start in recovery mode, to see replication state, run:
kubectl --namespace=mysql exec -c init-config mariadb-0 -- touch /tmp/confirm-recover
Or to try a regular start (for example after recovery + manual intervention), run:
kubectl --namespace=mysql exec -c init-config mariadb-0 -- touch /tmp/confirm-resume
Waiting for response ...
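For context on what the init script is asking: after a whole-cluster crash, Galera needs exactly one node to bootstrap, and that decision normally comes from each node's grastate.dat (`safe_to_bootstrap: 1`, or the highest `seqno` reported by a `--wsrep-recover` run). Below is a minimal sketch of that decision logic; the grastate.dat format is standard Galera, but the pod names and example values are hypothetical, not taken from this cluster:

```python
# Sketch: parse Galera grastate.dat contents and pick the node that
# should bootstrap a fully-crashed cluster. In this setup the file
# would live under /data/db (an assumption based on the log above).

def parse_grastate(text: str) -> dict:
    """Parse key: value pairs from grastate.dat, skipping comments."""
    state = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        state[key.strip()] = value.strip()
    return state

def pick_bootstrap_node(states: dict) -> str:
    """Return the pod marked safe_to_bootstrap: 1, or, failing that,
    the pod with the highest seqno (seqno is -1 after a hard crash
    until mysqld --wsrep-recover fills it in)."""
    for pod, st in states.items():
        if st.get("safe_to_bootstrap") == "1":
            return pod
    return max(states, key=lambda p: int(states[p].get("seqno", "-1")))

# Hypothetical example: neither node was shut down cleanly, so no node
# is marked safe; the one with the highest recovered seqno wins.
example = {
    "mariadb-0": parse_grastate(
        "# GALERA saved state\nversion: 2.1\nseqno: 1042\nsafe_to_bootstrap: 0\n"
    ),
    "mariadb-1": parse_grastate(
        "# GALERA saved state\nversion: 2.1\nseqno: 1045\nsafe_to_bootstrap: 0\n"
    ),
}
print(pick_bootstrap_node(example))  # mariadb-1 (higher seqno)
```

In practice this means: run the confirm-recover path first on each pod to learn the seqno, then confirm-new-cluster only on the node with the most recent state.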
So, I tried all three of the above options, but no luck. The new pods always end up in CrashLoopBackOff.
Any suggestion would be very much appreciated.