A simple Kubernetes operator that demonstrates managing Terraform resources via Kubernetes Jobs.
This is a learning-focused operator that manages a SimpleResource custom resource. When you create a SimpleResource, the operator:
- Renders a Terraform template with your spec
- Creates a Kubernetes Job that runs `terraform init && terraform apply`
- Waits for the Job to complete
- Extracts Terraform outputs and stores them in a Secret
- Updates the resource status
When you delete a SimpleResource, a finalizer ensures terraform destroy runs first.
This project solves a specific problem: running Terraform as a first-class citizen in Kubernetes, managed through the reconciliation loop. Instead of relying on CI/CD pipelines or external tools, the infrastructure becomes part of your cluster's declarative state.
- Simple to understand: Uses local Terraform providers (no cloud credentials needed)
- Full lifecycle management: Create → Update → Delete with finalizers
- Idempotent: Tracks spec changes via hash, only re-applies when needed
- Kubernetes-native: Outputs stored in Secrets, resources tracked as CRDs
- Observable: Full status reporting and phase tracking
- Go 1.22+
- Kind or any Kubernetes cluster
- kubectl
```bash
kind create cluster --name opr8r
kubectl apply -f config/crd/bases/infra.opr8r_simpleresources.yaml
go run cmd/main.go
```

The operator will automatically create the `terraform-executor` ServiceAccount and RoleBinding in each namespace where you create SimpleResources.
In another terminal:
```bash
kubectl apply -f config/samples/simple_v1alpha1_simpleresource.yaml

# Watch the resource
kubectl get simple -w

# Watch the Terraform jobs
kubectl get jobs -w

# Check the status
kubectl get simple simpleresource-sample -o yaml
```

You'll see:
- A Terraform Job created
- The Job runs terraform init and apply
- Status updates to "Ready"
- A random string generated and stored
```bash
# Get the secret with outputs
kubectl get secret simpleresource-sample-simple-outputs -o yaml

# View job logs
kubectl logs -l infra.opr8r/simpleresource=simpleresource-sample
```

```
User creates SimpleResource
        ↓
Controller detects new resource
        ↓
Render Terraform template
        ↓
Create Kubernetes Job (with 2 containers)
        ↓
Job Pod starts
  ├─ Container 1: Runs terraform init && apply, writes outputs
  └─ Container 2: Waits for terraform, creates Secret with outputs
        ↓
Controller waits for Job completion
        ↓
Extract outputs from Secret created by sidecar
        ↓
Parse Terraform outputs to map
        ↓
Update resource status (phase, randomValue, etc.)
        ↓
Create/update user-facing Secret with outputs
        ↓
Requeue for periodic reconciliation
```
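As a rough illustration, the two-container Job created by the controller might look like the manifest below. This is a hand-written sketch, not the operator's actual generated spec: the Job name, Secret name, sidecar image, volume paths, and shell commands are all illustrative (in the real flow the Terraform files are mounted from a ConfigMap, and the Secret name is derived from the resource).

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: simpleresource-sample-apply      # illustrative name
spec:
  template:
    spec:
      serviceAccountName: terraform-executor
      restartPolicy: Never
      containers:
        # Container 1: runs Terraform and writes outputs to the shared volume
        - name: terraform
          image: hashicorp/terraform:1.7.0
          workingDir: /workspace
          command:
            - sh
            - -c
            - terraform init && terraform apply -auto-approve &&
              terraform output -json > /workspace/outputs.json
          volumeMounts:
            - name: workspace
              mountPath: /workspace
        # Container 2 (sidecar): waits for the outputs file, then creates a Secret
        - name: output-writer
          image: bitnami/kubectl:latest   # illustrative image
          command:
            - sh
            - -c
            - until [ -f /workspace/outputs.json ]; do sleep 2; done;
              kubectl create secret generic tf-outputs
              --from-file=/workspace/outputs.json
          volumeMounts:
            - name: workspace
              mountPath: /workspace
      volumes:
        - name: workspace
          emptyDir: {}
```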
```yaml
apiVersion: infra.opr8r/v1alpha1
kind: SimpleResource
metadata:
  name: my-resource
spec:
  length: 16       # Length of random string to generate
  prefix: "test"   # Optional prefix
  vars:            # Optional Terraform variables
    special: "true"
    upper: "true"
```

This generates:
- A random string of specified length
- A local file with the output
- Outputs stored in a Kubernetes Secret
The operator updates the resource status through these phases:
- Pending: Initial state
- Applying: Running terraform apply
- Ready: Successfully applied
- Error: Something failed
- Destroying: Running terraform destroy
```
opr8r/
├── api/v1alpha1/                    # CRD definitions
│   ├── simpleresource_types.go
│   └── groupversion_info.go
├── internal/controller/             # Controller logic
│   └── simpleresource_controller.go
├── pkg/terraform/                   # Terraform helpers
│   ├── executor.go                  # Orchestrates workflow
│   ├── renderer.go                  # Template rendering
│   ├── job_runner.go                # Job management
│   └── outputs.go                   # Output parsing
├── modules/simple/                  # Terraform template
│   └── main.tf.tpl
├── config/                          # Kubernetes manifests
│   ├── crd/                        # CRD YAML
│   ├── rbac/                       # RBAC rules
│   ├── manager/                    # Deployment
│   └── samples/                    # Example resources
└── cmd/main.go                      # Entry point
```
```bash
# Format code
make fmt

# Run linter
make vet

# Build binary
make build

# Build Docker image
make docker-build IMG=your-registry/opr8r:tag
```

Modify the resource to trigger a re-apply:
```bash
kubectl patch simple simpleresource-sample --type='json' \
  -p='[{"op": "replace", "path": "/spec/length", "value": 20}]'
```

The operator will:
- Detect the spec hash changed
- Create a new Terraform Job
- Apply the changes
- Update status with new outputs
```bash
kubectl delete simple simpleresource-sample
```

The operator will:
- Detect deletion timestamp
- Update status to "Destroying"
- Create a terraform destroy Job
- Wait for completion
- Remove the finalizer
- Allow Kubernetes to delete the resource
```bash
for i in {1..3}; do
  cat <<EOF | kubectl apply -f -
apiVersion: infra.opr8r/v1alpha1
kind: SimpleResource
metadata:
  name: test-$i
spec:
  length: $((i * 5))
EOF
done
```

Watch them all reconcile in parallel.
The controller implements three main functions:
- Reconcile: Entry point, routes to normal or delete reconciliation
- reconcileNormal: Handles create/update logic
- reconcileDelete: Handles deletion with finalizer
Key patterns:
- Finalizers prevent deletion until cleanup completes
- Hash tracking enables idempotent reconciliation
- Phase tracking provides observability
Executor: High-level orchestration
- Apply(): Render → Run Job → Parse outputs
- Destroy(): Render → Run destroy Job → Cleanup
Renderer: Template processing
- Reads the .tpl file
- Substitutes values from spec
- Writes the rendered main.tf
JobRunner: Kubernetes Job management
- Creates ConfigMap with Terraform files
- Builds Job spec with sidecar pattern (see below)
- Waits for completion
- Retrieves outputs from Secret created by sidecar
Output Extraction: Uses a sidecar container pattern
- Main container runs Terraform and writes outputs to shared volume
- Sidecar container waits for completion, then creates Secret with outputs
- Controller reads outputs from Secret
Uses Go template syntax to inject values:

```hcl
resource "random_string" "value" {
  length = {{ .Length }}
}
```

If the Kubernetes cluster can't pull hashicorp/terraform:1.7.0, ensure Docker/containerd has internet access.
Check RBAC permissions:
```bash
kubectl describe clusterrole opr8r-manager-role
```

Check the Job logs:

```bash
kubectl logs -l infra.opr8r/simpleresource=<name>
```

The Job might have failed. Check:

```bash
kubectl get jobs
kubectl describe job <job-name>
```

Apache License 2.0
I retain all copyright to my contributions to this repository.