Description
Terraform Version
Terraform v1.1.0
on linux_amd64
Terraform Configuration Files
terraform {
  backend "kubernetes" {
    config_path   = "/home/lud0v1c/.kube/config"
    secret_suffix = "state"
  }
}

data "terraform_remote_state" "this" {
  backend = "kubernetes"

  config = {
    secret_suffix    = "state"
    load_config_file = true
    config_path      = "/home/lud0v1c/.kube/config"
  }
}
Debug Output
https://gist.github.com/lud0v1c/5e655d1a4fae07c69a217665435b56d2
Expected Behavior
There is no state in the backend, so Terraform should create a new one.
Actual Behavior
Every operation fails because another Terraform client apparently holds the state lock, as the debug output shows. This blocks everything: init, plan, and apply all fail, and force-unlock, state rm, and state pull do not work either.
Steps to Reproduce
- terraform init
- Now that the k8s backend is configured, perform any operation such as terraform plan, and kill it unsafely (e.g. a double CTRL+C) while it is running.
- Delete the state via kubectl delete secret tfstate-default-state.
- Try to init again, either with a fresh new state (terraform init) or with terraform init -migrate-state.
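The steps above, as one shell session (this is a sketch, not a verbatim transcript: it assumes the backend configuration above, a reachable cluster, and the secret name that follows from the default workspace plus secret_suffix "state"; the unsafe kill is simulated with kill -9):

```shell
# Reproduction sketch; assumes the backend configuration above and a reachable cluster.
terraform init                                # configure the kubernetes backend
terraform plan &                              # start an operation...
kill -9 $!                                    # ...kill it mid-run (stands in for a double CTRL+C)
kubectl delete secret tfstate-default-state   # remove the state secret
terraform init                                # still fails with the lock error
```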
Additional Context
I set up a k3s cluster a few days ago, and yesterday I switched from local state storage to storing the state on the cluster itself.
I deployed the backend and the terraform_remote_state data source without any problem. Everything was fine until an operation I was performing (an apply from my Windows desktop PC) was killed by network issues.
Knowing what this does, and since no changes had been made, I deleted the tfstate secret in the cluster. I can confirm there are no tfstate secrets in any namespace whatsoever.
Looking this up online, people mentioned that it could be another client/process, but I checked all local processes and even tried initializing from my laptop (with the machine that ran the original failed operation shut down); that also fails, always with the same error message.
I really cannot tell where this state/lock is being fetched from; even rebooting my k8s nodes did nothing!
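In case it helps triage: my working assumption (not verified against the backend source) is that the kubernetes backend may store its lock in an object separate from the state secret, for example a coordination.k8s.io Lease, which deleting only the tfstate secret would leave behind. A diagnostic sketch (the object name lock-tfstate-default-state is a guess):

```shell
# Look for leftover backend objects across all namespaces (names/greps are assumptions).
kubectl get secrets --all-namespaces | grep tfstate
kubectl get leases --all-namespaces | grep tfstate
# If a stale lock object such as lock-tfstate-default-state turns up,
# deleting it should release the lock:
# kubectl delete lease lock-tfstate-default-state
```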