From 418e9a0ed525c67dead52807a39b5010e687e333 Mon Sep 17 00:00:00 2001
From: Adam Overa
Date: Fri, 18 Jul 2025 15:03:45 -0400
Subject: [PATCH 1/4] Migrating Kubernetes Workloads to Linode Kubernetes Engine (LKE) Using Velero

---
 .../index.md | 770 ++++++++++++++++++
 1 file changed, 770 insertions(+)
 create mode 100644 docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md

diff --git a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
new file mode 100644
index 00000000000..1ff3ff65ef8
--- /dev/null
+++ b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
@@ -0,0 +1,770 @@
+---
+slug: migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero
+title: "Migrating Kubernetes Workloads to Linode Kubernetes Engine (LKE) Using Velero"
+description: "Learn how to use Velero to back up Kubernetes workloads and persistent volume data and restore them to a Linode Kubernetes Engine (LKE) cluster. This guide demonstrates the process with an example migration from AWS EKS using S3-backed backups."
+authors: ["Akamai"]
+contributors: ["Akamai"]
+published: 2025-07-18
+keywords: ['velero','kubernetes migration','lke','linode kubernetes engine','backup and restore','persistent volumes']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[Velero Documentation](https://velero.io/docs/v1.16/)'
+- '[Linode Kubernetes Engine](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine)'
+---
+
+Migrating a Kubernetes cluster has several use cases, including disaster recovery (for example, when your primary Kubernetes provider suffers an incident) or the need to change providers for feature or cost reasons.
+
+Performing this migration safely requires taking a complete snapshot of all the resources in the source cluster and then restoring that snapshot on the target cluster. After snapshot restoration, all external traffic is pointed to the new cluster, and the old cluster (if it can be accessed) is shut down.
+
+Deploying Kubernetes resources can be straightforward if you have a solid CI/CD pipeline in place. However, there may be reasons why you can't simply point your CI/CD pipeline to the new cluster to handle the migration of all resources, including:
+
+* Your CI/CD pipeline itself may be running in the source cluster and could be inaccessible.
+* Some resources—like secrets—are provisioned using different processes, separate from CI/CD.
+* Your persistent data volumes contain important data that can't be copied over using your CI/CD pipeline.
+
+In scenarios such as these, DevOps engineers may depend on Velero.
+
+### What is Velero?
+
+[**Velero**](https://velero.io/) is an open-source, Kubernetes-native tool for backing up and restoring Kubernetes resources and persistent volumes. It supports backup of core resources, namespaces, deployments, services, ConfigMaps, Secrets, and custom resource definitions (CRDs). It integrates with different storage backends—such as AWS S3 or Linode Object Storage—for storing and restoring backups.
+
+This guide will walk through the process of using Velero to migrate a Kubernetes cluster with persistent volumes to Linode Kubernetes Engine (LKE). The focus of the guide will be on backing up and restoring a persistent data volume.
For other aspects—such as adapting load balancing and DNS switching after the restore—refer to the Akamai Cloud guides on migrating to LKE (from [AWS EKS](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/), [Google GKE](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/), [Azure AKS](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/), or [Oracle OKE](https://www.linode.com/docs/guides/migrating-from-oracle-kubernetes-engine-to-linode-kubernetes-engine-lke/)). + +Although what's shown in this guide will start with an AWS EKS cluster as an example, the same process can apply to most Kubernetes providers. + +## Before You Begin + +1. Follow Akamai's [Getting Started](https://techdocs.akamai.com/cloud-computing/docs/getting-started) guide, and create an Akamai Cloud account if you do not already have one. +2. Create a personal access token using the instructions in the [Manage personal access tokens](https://techdocs.akamai.com/cloud-computing/docs/manage-personal-access-tokens) guide. +3. Install the Linode CLI using the instructions in the [Install and configure the CLI](https://techdocs.akamai.com/cloud-computing/docs/install-and-configure-the-cli) guide. +4. Follow the steps in the \_*Install* \`*kubectl*\`\_ section of the [Getting started with LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#install-kubectl) guide to install and configure \`kubectl\`. +5. If migrating a cluster from AWS, ensure that you have access to your AWS account with sufficient permissions to work with EKS clusters. +6. Install and configure the [AWS CLI](https://aws.amazon.com/cli/) and \`[eksctl](https://eksctl.io/)\`. The command line tooling you use may vary if migrating a cluster from another provider. +7. Install \`[jq](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers)\`. +8. Install the \`[velero](https://velero.io/docs/v1.3.0/velero-install/)\` [CLI](https://velero.io/docs/v1.3.0/velero-install/). + +## Downtime During the Migration + +The migration process shown in this guide will involve some downtime. Keep in mind the following considerations during the migration: + +* Double capacity might be required, so be aware of your usage quotas and limits. +* Both clusters (if available) might run concurrently for a period of time. +* Data will need to be read from and written to both clusters to keep them in sync. Appropriate read/write permissions must be in place. +* Incrementally by workloads, access to the source cluster will become read-only and eventually removed. +* Unified observability across both clusters may be beneficial. +* If problems occur on the new cluster, you will need the ability to roll back any workload. + +## Prepare the Source Cluster for Velero Usage + +The starting point for this guide is an AWS EKS cluster that has already been provisioned in AWS’s \`us-west-2\` region. Before installing and using Velero, take the following steps to prepare your source cluster. + +1. **Associate the EKS cluster with an OIDC provider**: Enables Kubernetes service accounts to securely assume AWS IAM roles. +2. **Provision EBS CSI support in the cluster**: Allows Kubernetes to dynamically provision and manage EBS volumes. +3. 
**Create a \`StorageClass\` using the EBS CSI provisioner**: Defines the provisioning behavior for EBS-backed volumes when persistent volume claims are made in the cluster. +4. **Create an S3 bucket for storing Velero backups**: Sets up the location for Velero to save and retrieve backup data and snapshots. +5. **Set up IAM credentials for Velero to use S3**: Grants Velero the necessary permissions to access the S3 bucket for backup and restore operations. + +With these pieces in place, you'll be ready to install Velero with the necessary permissions and infrastructure to back up workloads—including persistent volume data—from the EKS cluster to S3. + +### Associate the cluster with an OIDC provider + +An OIDC provider is required to enable [IAM roles for service accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), which is the recommended way for Velero to authenticate to AWS services like S3. + +\`\`\`command {title="Set initial environment variables for terminal session"} + +| export AWS\_PROFILE='INSERT YOUR AWS PROFILE' export EKS\_CLUSTER="my-source-k8s-cluster" export REGION="us-west-2" export ACCOUNT\_ID=$(aws sts get-caller-identity \--query Account \--output text) | +| :---- | + +\`\`\` + +[Create the OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) with the following command: + +\`\`\`command {title="Create OIDC provider"} + +| eksctl utils associate-iam-oidc-provider \\ \--cluster "$EKS\_CLUSTER" \\ \--region "$REGION" \\ \--approve | +| :---- | + +\`\`\` + +\`\`\`output + +| 2025-05-31 11:51:46 \[ℹ\] will create IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" 2025-05-31 11:51:47 \[✔\] created IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" | +| :---- | + +\`\`\` + +Verify that OIDC creation was successful. + +\`\`\`command {title="Verify successful OIDC creation"} + +| aws eks describe-cluster \\ \--name "$EKS\_CLUSTER" \\ \--region "$REGION" \\ \--query "cluster.identity.oidc.issuer" \\ \--output text | +| :---- | + +\`\`\` + +\`\`\`output + +| https://oidc.eks.us-west-2.amazonaws.com/id/50167EE12C1795D19075628E119 | +| :---- | + +\`\`\` + +Capture the last part of the output string with the OIDC provider ID and store it as an environment variable: + +\`\`\`command {title="Store OIDC provider id as environment variable"} + +| export OIDC\_ID=50167EE12C1795D19075628E119 | +| :---- | + +\`\`\` + +### Provision EBS CSI support in the cluster + +The CSI provisioner is a plugin that allows Kubernetes to create and manage storage volumes—like EBS disks—on demand, whenever a \`PersistentVolumeClaim\` (PVC) is made. Provisioning EBS CSI support requires a few steps. + +Create an IAM role for the EBS CSI driver with the trust policy for OIDC. 
+ +\`\`\`command {title="Create IAM role for EBS CSI driver"} + +| aws iam create-role \\ \--role-name AmazonEKS\_EBS\_CSI\_DriverRole \\ \--assume-role-policy-document "{ \\"Version\\": \\"2012-10-17\\", \\"Statement\\": \[ { \\"Effect\\": \\"Allow\\", \\"Principal\\": { \\"Federated\\": \\"arn:aws:iam::${ACCOUNT\_ID}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${OIDC\_ID}\\" }, \\"Action\\": \\"sts:AssumeRoleWithWebIdentity\\", \\"Condition\\": { \\"StringEquals\\": { \\"oidc.eks.${REGION}.amazonaws.com/id/${OIDC\_ID}:sub\\": \\"system:serviceaccount:kube-system:ebs-csi-controller-sa\\" } } } \] }" | +| :---- | + +\`\`\` + +Attach the \`[AmazonEBSCSIDriverPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEBSCSIDriverPolicy.html)\` policy to the role. + +\`\`\`command {title="Attach policy to EBS CSI Driver role"} + +| aws iam attach-role-policy \\ \--role-name AmazonEKS\_EBS\_CSI\_DriverRole \\ \--policy-arn \\ arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy | +| :---- | + +\`\`\` + +Install the CSI provisioner for EBS volumes. + +\`\`\`command {title="Install CSI provisioner for EBS"} + +| aws eks create-addon \\ \--cluster-name "$EKS\_CLUSTER" \\ \--addon-name aws-ebs-csi-driver \\ \--service-account-role-arn \\ "arn:aws:iam::${ACCOUNT\_ID}:role/AmazonEKS\_EBS\_CSI\_DriverRole" \\ \--region "$REGION" | +| :---- | + +\`\`\` + +Wait for the EBS CSI driver to become active. + +\`\`\`command {title="Wait for EBS CSI driver to become active"} + +| until \[\[ "$(aws eks describe-addon \\ \--cluster-name "$EKS\_CLUSTER" \\ \--addon-name aws-ebs-csi-driver \\ \--region "$REGION" \\ \--query 'addon.status' \\ \--output text)" \= "ACTIVE" \]\]; do echo "Waiting for aws-ebs-csi-driver to become ACTIVE…" sleep 10 done echo "EBS CSI driver is ACTIVE." | +| :---- | + +\`\`\` + +\`\`\`output + +| Waiting for aws-ebs-csi-driver to become ACTIVE… Waiting for aws-ebs-csi-driver to become ACTIVE… Waiting for aws-ebs-csi-driver to become ACTIVE… EBS CSI driver is ACTIVE. | +| :---- | + +\`\`\` + +### Create a \`StorageClass\` + +Use the EBS CSI provisioner to create a \`[StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)\`. + +\`\`\`command {title="Create a StorageClass"} + +| echo ' apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ebs-sc provisioner: ebs.csi.aws.com volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete' | kubectl apply \-f \- | +| :---- | + +\`\`\` + +### Create an S3 bucket + +Create the S3 bucket where Velero can store its backups. + +\`\`\`command {title="Add the BUCKET\_NAME environment variable to the terminal session"} + +| export BUCKET\_NAME=velero-backup-7777 | +| :---- | + +\`\`\` + +\`\`\`command {title="Create S3 bucket"} + +| aws s3api create-bucket \\ \--bucket "$BUCKET\_NAME" \\ \--region "$REGION" \\ \--create-bucket-configuration LocationConstraint="$REGION" | +| :---- | + +\`\`\` + +\`\`\`output + +| { "Location": "http://velero-backup-7777.s3.amazonaws.com/" } | +| :---- | + +\`\`\` + +The bucket should not be public. Only Velero should access it. 
+ +\`\`\`command {title="Block public access to S3 bucket"} + +| aws s3api put-public-access-block \\ \--bucket "$BUCKET\_NAME" \\ \--public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true | +| :---- | + +\`\`\` + +### Set up IAM credentials for Velero to use S3 + +To give Velero access to the S3 bucket, begin by creating the IAM policy. + +\`\`\`command {title="Create IAM policy for Velero to access S3, then echo policy ARN"} + +| POLICY\_ARN=$(aws iam create-policy \\ \--policy-name VeleroS3AccessPolicy \\ \--policy-document "{ \\"Version\\": \\"2012-10-17\\", \\"Statement\\": \[ { \\"Sid\\": \\"ListAndGetBucket\\", \\"Effect\\": \\"Allow\\", \\"Action\\": \[ \\"s3:ListBucket\\", \\"s3:GetBucketLocation\\" \], \\"Resource\\": \\"arn:aws:s3:::$BUCKET\_NAME\\" }, { \\"Sid\\": \\"CRUDonObjects\\", \\"Effect\\": \\"Allow\\", \\"Action\\": \[ \\"s3:PutObject\\", \\"s3:GetObject\\", \\"s3:DeleteObject\\" \], \\"Resource\\": \\"arn:aws:s3:::$BUCKET\_NAME/\*\\" } \] }" \\ \--query 'Policy.Arn' \--output text) echo $POLICY\_ARN | +| :---- | + +\`\`\` + +\`\`\`output + +| arn:aws:iam::431966127852:policy/VeleroS3AccessPolicy | +| :---- | + +\`\`\` + +Create the Velero user and attach the policy. + +\`\`\`command {title="Create Velero user and attach policy"} + +| aws iam create-user \--user-name velero aws iam attach-user-policy \\ \--user-name velero \\ \--policy-arn "$POLICY\_ARN" | +| :---- | + +\`\`\` + +\`\`\`output + +| { "User": { "Path": "/", "UserName": "velero", "UserId": "AIDAWE6V6YHZ6334NZZ3Z", "Arn": "arn:aws:iam::431966127852:user/velero", "CreateDate": "2025-05-31T07:03:40+00:00" } } | +| :---- | + +\`\`\` + +The \`velero\` IAM user now has access to the bucket. Create a credentials file for Velero to use. + +\`\`\`command {title="Create credentials file"} + +| CREDENTIALS\_FILE=\~/aws-credentials-velero aws iam create-access-key \--user-name velero \--query 'AccessKey.\[AccessKeyId,SecretAccessKey\]' \--output text | \\ awk \-v OUT="$CREDENTIALS\_FILE" ' { print "\[default\]" \> OUT; print "aws\_access\_key\_id \= "$1 \>\> OUT; print "aws\_secret\_access\_key \= "$2 \>\> OUT; }' | +| :---- | + +\`\`\` + +Verify the credentials file was created successfully. + +## Install and Configure Velero on Source Cluster + +With the source cluster properly prepared, you can install Velero on the EKS cluster, configured with the S3 backup location and credentials file that authorizes access to the bucket. 
+ +\`\`\`command {title="Install Velero on source cluster"} + +| velero install \\ \--provider aws \\ \--plugins velero/velero-plugin-for-aws:v1.12.0 \\ \--bucket "$BUCKET\_NAME" \\ \--secret-file $CREDENTIALS\_FILE \\ \--backup-location-config region=$REGION \\ \--use-node-agent \\ \--use-volume-snapshots=false \\ \--default-volumes-to-fs-backup | +| :---- | + +\`\`\` + +\`\`\`output + +| CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client CustomResourceDefinition/backuprepositories.velero.io: created CustomResourceDefinition/backups.velero.io: attempting to create resource CustomResourceDefinition/backups.velero.io: attempting to create resource client CustomResourceDefinition/backups.velero.io: created CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client CustomResourceDefinition/backupstoragelocations.velero.io: created CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client CustomResourceDefinition/deletebackuprequests.velero.io: created CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client CustomResourceDefinition/downloadrequests.velero.io: created CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client CustomResourceDefinition/podvolumebackups.velero.io: created CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client CustomResourceDefinition/podvolumerestores.velero.io: created CustomResourceDefinition/restores.velero.io: attempting to create resource CustomResourceDefinition/restores.velero.io: attempting to create resource client CustomResourceDefinition/restores.velero.io: created CustomResourceDefinition/schedules.velero.io: attempting to create resource CustomResourceDefinition/schedules.velero.io: attempting to create resource client CustomResourceDefinition/schedules.velero.io: created CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client CustomResourceDefinition/serverstatusrequests.velero.io: created CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client CustomResourceDefinition/volumesnapshotlocations.velero.io: created CustomResourceDefinition/datadownloads.velero.io: attempting to create resource CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client CustomResourceDefinition/datadownloads.velero.io: created CustomResourceDefinition/datauploads.velero.io: attempting to create resource CustomResourceDefinition/datauploads.velero.io: attempting to create resource client CustomResourceDefinition/datauploads.velero.io: created Waiting for resources to be ready in cluster... 
Namespace/velero: attempting to create resource Namespace/velero: attempting to create resource client Namespace/velero: created ClusterRoleBinding/velero: attempting to create resource ClusterRoleBinding/velero: attempting to create resource client ClusterRoleBinding/velero: created ServiceAccount/velero: attempting to create resource ServiceAccount/velero: attempting to create resource client ServiceAccount/velero: created Secret/cloud-credentials: attempting to create resource Secret/cloud-credentials: attempting to create resource client Secret/cloud-credentials: created BackupStorageLocation/default: attempting to create resource BackupStorageLocation/default: attempting to create resource client BackupStorageLocation/default: created Deployment/velero: attempting to create resource Deployment/velero: attempting to create resource client Deployment/velero: created DaemonSet/node-agent: attempting to create resource DaemonSet/node-agent: attempting to create resource client DaemonSet/node-agent: created Velero is installed\! ⛵ Use 'kubectl logs deployment/velero \-n velero' to view the status. | +| :---- | + +\`\`\` + +To perform its full range of tasks, Velero creates its own namespace, several CRDs, a deployment, a service, and a node agent. Verify the Velero installation. + +\`\`\`command {title="Check Velero version"} + +| velero version | +| :---- | + +\`\`\` + +\`\`\`output + +| Client: Version: v1.16.1 Git commit: \- Server: Version: v1.16.1 | +| :---- | + +\`\`\` + +Check the pods in the \`velero\` namespace. + +\`\`\`command {title="Get pods in Velero namespace"} + +| kubectl get pods \-n velero | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME READY STATUS RESTARTS AGE node-agent-chnzw 1/1 Running 0 59s node-agent-ffqlg 1/1 Running 0 59s velero-6f4546949d-kjtnv 1/1 Running 0 59s | +| :---- | + +\`\`\` + +Verify the backup location configured for Velero. + +\`\`\`command {title="Get backup location for Velero"} + +| velero backup-location get | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT default aws velero-backup-7777 Available 2025-05-31 10:12:12 \+0300 IDT ReadWrite true | +| :---- | + +\`\`\` + +## Create a PersistentVolumeClaim in Source Cluster + +In Kubernetes, the PersistentVolumeClaim (PVC) is the mechanism for creating persistent volumes that can be mounted to pods in the cluster. Create the PVC in the source cluster. + +\`\`\`command {title="Create PersistentVolumeClaim"} + +| echo ' apiVersion: v1 kind: PersistentVolumeClaim metadata: name: the-pvc spec: accessModes: \- ReadWriteOnce storageClassName: ebs-sc resources: requests: storage: 1Mi ' | kubectl \-n default apply \-f \- | +| :---- | + +\`\`\` + +Note that this command uses the \`StorageClass\` named \`ebs-sc\`, which was created earlier. + +\`\`\`output + +| persistentvolumeclaim/the-pvc created | +| :---- | + +\`\`\` + +Verify the PVC was created successfully. + +\`\`\`command {title="Get PVC"} + +| kubectl get pvc \-n default | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE the-pvc Pending ebs-sc \ 9s | +| :---- | + +\`\`\` + +Its status should be \`Pending\`. This is by design, as the status remains \`Pending\` until the first consumer uses it. + +## Run a Pod to Use the PVC and Write Data + +Once a pod mounts a volume backed by the PVC, a corresponding persistent volume (in this example, backed by AWS EBS) will be created. 
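If you want to watch this binding happen, you can keep a watch on the claim in a second terminal while you start the pod. This is an optional check, shown here as a minimal sketch; it uses the `the-pvc` name and `default` namespace from the claim created above:

```command {title="Optionally watch the PVC bind from a second terminal"}
kubectl get pvc the-pvc -n default -w
```

The claim should move from `Pending` to `Bound` shortly after the pod below starts, and a matching `PersistentVolume` then appears in `kubectl get pv`.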
Run a pod to mount the volume with the following command: + +\`\`\`command {title="Run a pod to mount the PVC-backed volume"} + +| kubectl run the-pod \\ \--image=bash:latest \\ \--restart=Never \\ \-it \\ \--overrides=' { "apiVersion": "v1", "spec": { "volumes": \[ { "name": "the-vol", "persistentVolumeClaim": { "claimName": "the-pvc" } } \], "containers": \[ { "name": "the-container", "image": "bash:latest", "command": \["bash"\], "stdin": true, "tty": true, "volumeMounts": \[ { "mountPath": "/data", "name": "the-vol" } \] } \] } }' \\ \-- bash | +| :---- | + +\`\`\` + +From the open bash shell, write sample data into the volume. + +\`\`\`command {title="Use pod's bash shell to write sample data"} + +| bash-5.2\# echo "Some data" \> /data/some-data.txt bash-5.2\# cat /data/some-data.txt | +| :---- | + +\`\`\` + +\`\`\`output + +| Some data | +| :---- | + +\`\`\` + +## Create a Velero Backup, then Verify + +With Velero installed and the persistent volume in place, run the backup command: + +\`\`\`command {title="Use Velero to create a backup"} + +| velero backup create test-backup \--wait | +| :---- | + +\`\`\` + +\`\`\`output + +| Backup request "test-backup" submitted successfully. Waiting for backup to complete. You may safely press ctrl-c to stop waiting \- your backup will continue in the background. ............................................................. Backup completed with status: Completed. You may check for more information using the commands \`velero backup describe test-backup\` and \`velero backup logs test-backup\`. | +| :---- | + +\`\`\` + +After the backup process has completed, use the \`backup describe\` command to confirm a successful backup: + +\`\`\`command {title="Describe the backup"} + +| velero backup describe test-backup | +| :---- | + +\`\`\` + +\`\`\`output + +| Name: test-backup Namespace: velero Labels: velero.io/storage-location=default Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.32.5-eks-5d4a308 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=32 Phase: Completed Namespaces: Included: \* Excluded: \ Resources: Included: \* Excluded: \ Cluster-scoped: auto Label selector: \ Or label selector: \ Storage Location: default Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: \ Backup Format Version: 1.1.0 Started: 2025-05-31 21:44:31 \+0300 IDT Completed: 2025-05-31 21:45:33 \+0300 IDT Expiration: 2025-06-30 21:44:31 \+0300 IDT Total items to be backed up: 454 Items backed up: 454 Backup Volumes: Velero-Native Snapshots: \ CSI Snapshots: \ Pod Volume Backups \- kopia (specify \--details for more information): Completed: 11 HooksAttempted: 0 HooksFailed: 0 | +| :---- | + +The critical information to verify is the Kopia item for pod volume backups toward the end of the output. Note in the above example that it says \`Completed: 11\`. This verifies the presence of backups. + +## Verify Backup in S3 + +To close the loop, verify that the backup data has made its way to the configured S3 bucket. 
+ +\`\`\`command {title="List contents of test backup"} + +| s3cmd ls s3://$BUCKET\_NAME/backups/test-backup/ | +| :---- | + +\`\`\` + +\`\`\`output + +| 2025-05-31 21:45:34 29 test-backup-csi-volumesnapshotclasses.json.gz 2025-05-31 21:45:33 29 test-backup-csi-volumesnapshotcontents.json.gz 2025-05-31 21:45:34 29 test-backup-csi-volumesnapshots.json.gz 2025-05-31 21:45:33 27 test-backup-itemoperations.json.gz 2025-05-31 21:45:33 23733 test-backup-logs.gz 2025-05-31 21:45:34 2481 test-backup-podvolumebackups.json.gz 2025-05-31 21:45:34 3022 test-backup-resource-list.json.gz 2025-05-31 21:45:34 49 test-backup-results.gz 2025-05-31 21:45:33 922 test-backup-volumeinfo.json.gz 2025-05-31 21:45:34 29 test-backup-volumesnapshots.json.gz 2025-05-31 21:45:33 138043 test-backup.tar.gz 2025-05-31 21:45:34 2981 velero-backup.json | +| :---- | + +\`\`\` + +## Provision LKE Cluster + +The persistent volume on your source cluster has been backed up using Velero. Now, provision your destination cluster on Akamai Cloud. There are several ways to create a Kubernetes cluster on Akamai Cloud. This guide uses the Linode CLI to provision resources. + +See the [LKE documentation](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) for instructions on how to provision a cluster using Cloud Manager. + +### See available Kubernetes versions + +Use the Linode CLI (linode-cli) to see available Kubernetes versions: + +\`\`\`command {title="List available Kubernetes versions"} + +| linode lke versions-list | +| :---- | + +\`\`\` + +\`\`\`output + +| ┌──────┐ │ id │ ├──────┤ │ 1.32 │ ├──────┤ │ 1.31 │ └──────┘ | +| :---- | + +\`\`\` + +Unless specific requirements dictate otherwise, it’s generally recommended to provision the latest version of Kubernetes. + +### Create a cluster + +Determine the type of Linode to provision. The examples in this guide use the g6-standard-2 Linode, which features two CPU cores and 4 GB of memory. Run the following command to create a cluster labeled \`velero-to-lke\` which uses the \`g6-standard-2\` Linode: + +\`\`\`command {title="Create LKE cluster"} + +| lin lke cluster-create \\ \--label velero-to-lke \\ \--k8s\_version 1.32 \\ \--region us-sea \\ \--node\_pools '\[{ "type": "g6-standard-2", "count": 1, "autoscaler": { "enabled": true, "min": 1, "max": 3 } }\]' | +| :---- | + +\`\`\` + +\`\`\`output + +| ┌────────┬───────────────┬────────┬─────────────┐ │ id │ label │ region │ k8s\_version │ ├────────┼───────────────┼────────┼─────────────┤ │ 463649 │ velero-to-lke │ us-sea │ 1.32 │ └────────┴───────────────┴────────┴─────────────┘ | +| :---- | + +\`\`\` + +### Access the cluster + +To access your cluster, fetch the cluster credentials as a \`kubeconfig\` file. Your cluster’s \`kubeconfig\` can also be [downloaded via the Cloud Manager](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#access-and-download-your-kubeconfig). 
Use the following command to retrieve the cluster’s ID: + +\`\`\`command {title="Retrieve cluster ID and set environment variable"} + +| CLUSTER\_ID=$(linode lke clusters-list \--json | \\ jq \-r '.\[\] | select(.label \== "velero-to-lke") | .id') | +| :---- | + +\`\`\` + +Retrieve the \`kubeconfig\` file and save it to \`\~/.kube/lke-config\`: +\`\`\`command {title="Retrieve and save kubeconfig file"} + +| linode lke kubeconfig-view \\ \--json "$CLUSTER\_ID" \\ | jq \-r '.\[0\].kubeconfig' \\ | base64 \--decode \> \~/.kube/lke-config | +| :---- | + +\`\`\` + +After saving the \`kubeconfig\`, access your cluster by using \`kubectl\` and specifying the file: + +\`\`\`command {title="Use kubectl with kubeconfig to get nodes"} + +| kubectl get nodes \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME STATUS ROLES AGE VERSION lke463649-678334-401dde8e0000 Ready \ 7m27s v1.32.1 | +| :---- | + +\`\`\` + +## Install Velero in LKE + +If you are working in a different terminal session, ensure you have the environment variables for \`BUCKET\_NAME\`, \`REGION\`, and \`CREDENTIALS\_FILE\` with values identical to those earlier in this guide. In case you need to set them again, the command will look similar to: + +\`\`\`command {title="Set environment variables"} + +| export BUCKET\_NAME=velero-backup-7777 export REGION=us-west-2 export CREDENTIALS\_FILE=\~/aws-credentials-velero | +| :---- | + +\`\`\` + +Run the following command to install Velero in your LKE cluster: + +\`\`\`command {title="Install Velero in LKE"} + +| velero install \\ \--kubeconfig \~/.kube/lke-config \\ \--provider aws \\ \--plugins velero/velero-plugin-for-aws:v1.12.0 \\ \--bucket "$BUCKET\_NAME" \\ \--secret-file $CREDENTIALS\_FILE \\ \--backup-location-config region=$REGION \\ \--use-node-agent \\ \--use-volume-snapshots=false \\ \--default-volumes-to-fs-backup | +| :---- | + +\`\`\` + +Verify the Velero installation: + +\`\`\`command {title="Verify the Velero installation"} + +| kubectl logs deployment/velero \\ \-n velero \\ \--kubeconfig \~/.kube/lke-config \\ | grep 'BackupStorageLocations is valid' | +| :---- | + +\`\`\` + +\`\`\`output + +| Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init) time="2025-05-31T20:52:50Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup\_storage\_location\_controller.go:128" | +| :---- | + +\`\`\` + +With the backup storage location properly configured, run this command to get information about existing backups. + +\`\`\`command {title="Get backups"} + +| velero backup get \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR test\-backup Completed 0 0 2025-05-31 21:44:31 \+0300 IDT 29d default \ | +| :---- | + +\`\`\` + +## Restore the Backup in LKE + +Now, use Velero to restore your source cluster backup into your destination cluster at LKE. + +\`\`\`command {title="Use Velero to restore a backup"} + +| velero restore create test\-restore \\ \--from-backup test\-backup \\ \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| Restore request "test-restore" submitted successfully. Run \`velero restore describe test\-restore\` or \`velero restore logs test\-restore\` for more details. 
| +| :---- | + +\`\`\` + +Check the restore status with the following command: +\`\`\`command {title="Check restore status"} + +| velero restore describe test\-restore \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +## Post-Restore Adjustments + +Because you are transitioning from one Kubernetes provider to another, you may need to make some final post-restore adjustments. + +For example, if your destination cluster is at LKE, you will want to update your PVC to use the Linode storage class. Review the Linode CSI drivers with the following command: + +\`\`\`command {title="See current CSI drivers"} + +| kubectl get csidrivers \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE ebs.csi.aws.com true false false \ false Persistent 22m efs.csi.aws.com false false false \ false Persistent 22m linodebs.csi.linode.com true true false \ false Persistent 69m | +| :---- | + +\`\`\` + +Review the available storage classes: + +\`\`\`command {title="Review available storage classes"} + +| kubectl get storageclass \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE ebs-sc ebs.csi.aws.com Delete WaitForFirstConsumer true 6h22m gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 6h22m linode-block-storage linodebs.csi.linode.com Delete Immediate true 7h9m linode-block-storage-retain (default) linodebs.csi.linode.com Retain Immediate true 7h9m | +| :---- | + +\`\`\` + +Use the default \`linode-block-storage-retain\` storage class. However, you must first delete the restored PVC and recreate it with the new storage class. + +\`\`\`command {title="Delete the restored PVC"} + +| kubectl delete pvc the-pvc \--kubeconfig \~/.kube/lke-config persistentvolumeclaim "the-pvc" deleted | +| :---- | + +\`\`\` + +\`\`\`command {title="Recreate the PVC with the new storage class"} + +| echo ' apiVersion: v1 kind: PersistentVolumeClaim metadata: name: the-pvc spec: accessModes: \- ReadWriteOnce resources: requests: storage: 1Mi ' | kubectl apply \-f \- \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| persistentvolumeclaim/the-pvc created | +| :---- | + +\`\`\` + +The new PVC is bound to a new persistent volume. Run the following command to see this: + +\`\`\`command {title="Get information about PVC, PV, and pod"} + +| kubectl get pvc,pv,pod \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/the-pvc Bound pvc-711d050fae7641ee 10Gi RWO linode-block-storage-retain \ 2m12s NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE persistentvolume/pvc-711d050fae7641ee 10Gi RWO Retain Bound default/the-pvc linode-block-storage-retain \ 2m9s NAME READY STATUS RESTARTS AGE pod/the-pod 0/1 Init:0/1 0 6h38m | +| :---- | + +\`\`\` + +Unfortunately, you'll see that the pod is in an \`Init\` state as it is trying to bind to the previous (and now invalid) PVC. You need to delete the pod, stop the blocked restore (by first deleting the finalizer), and re-run the restore. 
+ +\`\`\`command {title="Delete pod and stop the blocked restore"} + +| kubectl delete pod the-pod \--kubeconfig \~/.kube/lke-config kubectl patch restore test\-restore \\ \--patch '{"metadata":{"finalizers":\[\]}}' \\ \--type merge \\ \-n velero \\ \--kubeconfig \~/.kube/lke-config kubectl delete restore test-restore \\ \-n velero \\ \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +Now, re-run the restore. Velero is smart enough to detect that the PVC (called \`the-pvc\`) exists and will not overwrite it unless explicitly requested to do so. + +\`\`\`command {title="Re-run the Velero restore"} + +| velero restore create test\-restore \\ \--from-backup test\-backup \\ \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| Restore request "test-restore" submitted successfully. Run \`velero restore describe test\-restore\` or \`velero restore logs test\-restore\` for more details. | +| :---- | + +\`\`\` + +Verify your pod was restored. + +\`\`\`command {title="Verify successful pod restore"} + +| kubectl get pod the-pod \--kubeconfig \~/.kube/lke-config | +| :---- | + +\`\`\` + +\`\`\`output + +| NAME READY STATUS RESTARTS AGE the-pod 1/1 Running 0 118s | +| :---- | + +\`\`\` + +The pod is \`Running\`. Now, verify the volume is mounted and you can access on LKE the data that was written to the EBS volume on AWS. + +\`\`\`command {title="Run the pod and show the sample data that was written"} + +| kubectl exec the-pod \--kubeconfig \~/.kube/lke-config \-- cat /data/some-data.txt | +| :---- | + +\`\`\` + +\`\`\`output + +| Defaulted container "the-container" out of: the-container, restore-wait (init) Some data | +| :---- | + +\`\`\` +You have successfully performed an end-to-end backup and restore of a Kubernetes cluster (in this example, on AWS EKS) to a Linode LKE cluster, and this included persistent data migration across two different cloud object storage systems. + +## Final Considerations + +As you pursue this kind of migration, keep in mind the following important considerations. + +### Persistent data movements modes + +Velero supports both CSI snapshots as well as file system backup using Kopia. When restoring from a backup into a cluster of the same Kubernetes provider, it is recommended to use Velero's [CSI snapshots mode](https://velero.io/docs/main/csi/). This takes advantage of the Kubernetes CSI volume snapshots API and only requires that the same CSI driver is installed in the source and destination clusters. + +The file system backup mode used in this walkthrough is the best option when the source and destination Kubernetes providers are incompatible. + +### ConfigMaps, secrets, and certificates + +Secrets and certificates are often tied to the cloud provider. Velero will restore any Kubernetes Secret resource. However, if (for example) the Secret is used to access AWS services that were replaced by equivalent LKE services, then it would be unnecessary to migrate them. The same applies to ConfigMaps that may contain cloud-provider specific configuration. + +### Downtime planning + +Velero doesn't offer any special capabilities for facilitating zero-downtime migrations. A safe backup and restore will require blocking all or most traffic to the cluster. If you restore from a stale backup, then you either lose data or you will need to backfill data from the old cluster later. + +When downtime is unavoidable, then a safer approach is to schedule it. Perform a backup and immediately restore it to the new cluster. 
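To keep the eventual cutover window small, you can also let Velero refresh backups on a schedule in the days leading up to the migration, then take one final backup immediately before switching traffic. The following sketch is illustrative only; the schedule name and cron expression are assumptions, not values used elsewhere in this guide:

```command {title="Illustrative: keep a recurring Velero backup before cutover"}
# Take a backup every hour; backups created by a schedule are named <schedule>-<timestamp>
velero schedule create pre-cutover --schedule="0 * * * *"

# Review schedules and the backups they have produced
velero schedule get
velero backup get
```

Restoring from one of these scheduled backups works the same way as the `velero restore create` command used earlier in this guide.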
+
+### Other use case: backups for multi-cloud architectures
+
+While this guide focuses on migration, Velero can also support a multi-cloud Kubernetes strategy. By configuring Velero with backup locations across multiple cloud providers, you could:
+
+* Create a resilient disaster recovery setup by backing up workloads from one cluster and restoring them into another in a different cloud.
+* Enable workload portability between environments, which may be helpful for hybrid deployments or to meet data redundancy requirements for compliance reasons.
+
+The resources below are provided to help you become familiar with Velero when migrating your Kubernetes cluster to Linode LKE.
+
+## Additional Resources
+
+- Velero
+    - [Documentation Home](https://velero.io/docs/v1.16/)
+    - [Installing the Velero CLI](https://velero.io/docs/v1.16/basic-install/#install-the-cli)
+    - [Storage provider plugins](https://velero.io/docs/v1.16/supported-providers/)
+- Akamai Cloud
+    - [Linode LKE](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine)
+    - [Migrating from AWS EKS to Linode Kubernetes Engine (LKE)](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/)
+    - [Migrating from Azure AKS to Linode Kubernetes Engine (LKE)](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/)
+    - [Migrating from Google GKE to Linode Kubernetes Engine (LKE)](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/)
\ No newline at end of file

From 9f496701a4b2f86140ab921bd8ed31fca77b4833 Mon Sep 17 00:00:00 2001
From: Adam Overa
Date: Fri, 18 Jul 2025 18:12:56 -0400
Subject: [PATCH 2/4] Layout Edit 1

---
 ci/vale/dictionary.txt | 3 +
 .../index.md | 1226 ++++++++++-------
 2 files changed, 694 insertions(+), 535 deletions(-)

diff --git a/ci/vale/dictionary.txt b/ci/vale/dictionary.txt
index bdbe730db67..c0308c10455 100644
--- a/ci/vale/dictionary.txt
+++ b/ci/vale/dictionary.txt
@@ -772,6 +772,7 @@ filesystems
 filezilla
 filimonov
 findtime
+finalizer
 finnix
 fintech
 firefart
@@ -1269,6 +1270,7 @@ Kompose
 Konqueror
 konsole
 konversation
+kopia
 kotin
 kourier
 KPI
@@ -2766,6 +2768,7 @@ vdev
 vdevs
 ve
 veeam
+velero
 venv
 ver
 veracode
diff --git a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
index 1ff3ff65ef8..3b3db77c0bf 100644
--- a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
+++ b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
@@ -18,15 +18,15 @@ Performing this migration safely requires taking a complete snapshot of all the
 Deploying Kubernetes resources can be straightforward if you have a solid CI/CD pipeline in place.
However, there may be reasons why you can't simply point your CI/CD pipeline to the new cluster to handle the migration of all resources, including: -* Your CI/CD pipeline itself may be running in the source cluster and could be inaccessible. -* Some resources—like secrets—are provisioned using different processes, separate from CI/CD. -* Your persistent data volumes contain important data that can't be copied over using your CI/CD pipeline. +- Your CI/CD pipeline itself may be running in the source cluster and could be inaccessible. +- Some resources—like secrets—are provisioned using different processes, separate from CI/CD. +- Your persistent data volumes contain important data that can't be copied over using your CI/CD pipeline. In scenarios such as these, DevOps engineers may depend on Velero. -### What is Velero? +### What Is Velero? -[**Velero**](https://velero.io/) is an open-source, Kubernetes-native tool for backing up and restoring Kubernetes resources and persistent volumes. It supports backup of core resources, namespaces, deployments, services, ConfigMaps, Secrets, and customer resource definitions (CRDs). It integrates with different storage backends—such AWS S3 or Linode Object Storage—for storing and restoring backups. +[**Velero**](https://velero.io/) is an open source, Kubernetes-native tool for backing up and restoring Kubernetes resources and persistent volumes. It supports backup of core resources, namespaces, deployments, services, ConfigMaps, Secrets, and customer resource definitions (CRDs). It integrates with different storage backends—such AWS S3 or Linode Object Storage—for storing and restoring backups. This guide will walk through the process of using Velero to migrate a Kubernetes cluster with persistent volumes to Linode Kubernetes Engine (LKE). The focus of the guide will be on backing up and restoring a persistent data volume. For other aspects—such as adapting load balancing and DNS switching after the restore—refer to the Akamai Cloud guides on migrating to LKE (from [AWS EKS](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/), [Google GKE](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/), [Azure AKS](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/), or [Oracle OKE](https://www.linode.com/docs/guides/migrating-from-oracle-kubernetes-engine-to-linode-kubernetes-engine-lke/)). @@ -34,224 +34,275 @@ Although what's shown in this guide will start with an AWS EKS cluster as an exa ## Before You Begin -1. Follow Akamai's [Getting Started](https://techdocs.akamai.com/cloud-computing/docs/getting-started) guide, and create an Akamai Cloud account if you do not already have one. -2. Create a personal access token using the instructions in the [Manage personal access tokens](https://techdocs.akamai.com/cloud-computing/docs/manage-personal-access-tokens) guide. -3. Install the Linode CLI using the instructions in the [Install and configure the CLI](https://techdocs.akamai.com/cloud-computing/docs/install-and-configure-the-cli) guide. -4. Follow the steps in the \_*Install* \`*kubectl*\`\_ section of the [Getting started with LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#install-kubectl) guide to install and configure \`kubectl\`. -5. If migrating a cluster from AWS, ensure that you have access to your AWS account with sufficient permissions to work with EKS clusters. -6. 
Install and configure the [AWS CLI](https://aws.amazon.com/cli/) and \`[eksctl](https://eksctl.io/)\`. The command line tooling you use may vary if migrating a cluster from another provider.
-7. Install \`[jq](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers)\`.
-8. Install the \`[velero](https://velero.io/docs/v1.3.0/velero-install/)\` [CLI](https://velero.io/docs/v1.3.0/velero-install/).
+1. Follow Akamai's [Getting Started](https://techdocs.akamai.com/cloud-computing/docs/getting-started) guide, and create an Akamai Cloud account if you do not already have one.
+1. Create a personal access token using the instructions in the [Manage personal access tokens](https://techdocs.akamai.com/cloud-computing/docs/manage-personal-access-tokens) guide.
+1. Install the Linode CLI using the instructions in the [Install and configure the CLI](https://techdocs.akamai.com/cloud-computing/docs/install-and-configure-the-cli) guide.
+1. Follow the steps in the _Install `kubectl`_ section of the [Getting started with LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#install-kubectl) guide to install and configure `kubectl`.
+1. If migrating a cluster from AWS, ensure that you have access to your AWS account with sufficient permissions to work with EKS clusters.
+1. Install and configure the [AWS CLI](https://aws.amazon.com/cli/) and [`eksctl`](https://eksctl.io/). The command line tooling you use may vary if migrating a cluster from another provider.
+1. Install [`jq`](https://www.linode.com/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers).
+1. Install the [`velero` CLI](https://velero.io/docs/v1.3.0/velero-install/).

 ## Downtime During the Migration

 The migration process shown in this guide will involve some downtime. Keep in mind the following considerations during the migration:

-* Double capacity might be required, so be aware of your usage quotas and limits.
-* Both clusters (if available) might run concurrently for a period of time.
-* Data will need to be read from and written to both clusters to keep them in sync. Appropriate read/write permissions must be in place.
-* Incrementally by workloads, access to the source cluster will become read-only and eventually removed.
-* Unified observability across both clusters may be beneficial.
-* If problems occur on the new cluster, you will need the ability to roll back any workload.
+- Double capacity might be required, so be aware of your usage quotas and limits.
+- Both clusters (if available) might run concurrently for a period of time.
+- Data will need to be read from and written to both clusters to keep them in sync. Appropriate read/write permissions must be in place.
+- Access to the source cluster will become read-only, and will eventually be removed, one workload at a time.
+- Unified observability across both clusters may be beneficial.
+- If problems occur on the new cluster, you will need the ability to roll back any workload.

 ## Prepare the Source Cluster for Velero Usage

+The starting point for this guide is an AWS EKS cluster that has already been provisioned in AWS’s `us-west-2` region. Before installing and using Velero, take the following steps to prepare your source cluster. -1. **Associate the EKS cluster with an OIDC provider**: Enables Kubernetes service accounts to securely assume AWS IAM roles. -2. **Provision EBS CSI support in the cluster**: Allows Kubernetes to dynamically provision and manage EBS volumes. -3. **Create a \`StorageClass\` using the EBS CSI provisioner**: Defines the provisioning behavior for EBS-backed volumes when persistent volume claims are made in the cluster. -4. **Create an S3 bucket for storing Velero backups**: Sets up the location for Velero to save and retrieve backup data and snapshots. -5. **Set up IAM credentials for Velero to use S3**: Grants Velero the necessary permissions to access the S3 bucket for backup and restore operations. +1. **Associate the EKS cluster with an OIDC provider**: Enables Kubernetes service accounts to securely assume AWS IAM roles. +1. **Provision EBS CSI support in the cluster**: Allows Kubernetes to dynamically provision and manage EBS volumes. +1. **Create a `StorageClass` using the EBS CSI provisioner**: Defines the provisioning behavior for EBS-backed volumes when persistent volume claims are made in the cluster. +1. **Create an S3 bucket for storing Velero backups**: Sets up the location for Velero to save and retrieve backup data and snapshots. +1. **Set up IAM credentials for Velero to use S3**: Grants Velero the necessary permissions to access the S3 bucket for backup and restore operations. With these pieces in place, you'll be ready to install Velero with the necessary permissions and infrastructure to back up workloads—including persistent volume data—from the EKS cluster to S3. -### Associate the cluster with an OIDC provider +### Associate the Cluster with an OIDC Provider An OIDC provider is required to enable [IAM roles for service accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), which is the recommended way for Velero to authenticate to AWS services like S3. 
-\`\`\`command {title="Set initial environment variables for terminal session"} - -| export AWS\_PROFILE='INSERT YOUR AWS PROFILE' export EKS\_CLUSTER="my-source-k8s-cluster" export REGION="us-west-2" export ACCOUNT\_ID=$(aws sts get-caller-identity \--query Account \--output text) | -| :---- | - -\`\`\` +```command {title="Set initial environment variables for terminal session"} +export AWS_PROFILE='INSERT YOUR AWS PROFILE' +export EKS_CLUSTER="my-source-k8s-cluster" +export REGION="us-west-2" +export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +``` [Create the OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) with the following command: -\`\`\`command {title="Create OIDC provider"} - -| eksctl utils associate-iam-oidc-provider \\ \--cluster "$EKS\_CLUSTER" \\ \--region "$REGION" \\ \--approve | -| :---- | - -\`\`\` - -\`\`\`output - -| 2025-05-31 11:51:46 \[ℹ\] will create IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" 2025-05-31 11:51:47 \[✔\] created IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" | -| :---- | +```command {title="Create OIDC provider"} +eksctl utils associate-iam-oidc-provider \ + --cluster "$EKS_CLUSTER" \ + --region "$REGION" \ + --approve +``` -\`\`\` +```output +2025-05-31 11:51:46 [ℹ] will create IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" +2025-05-31 11:51:47 [✔] created IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" +``` Verify that OIDC creation was successful. -\`\`\`command {title="Verify successful OIDC creation"} +```command {title="Verify successful OIDC creation"} +aws eks describe-cluster \ + --name "$EKS_CLUSTER" \ + --region "$REGION" \ + --query "cluster.identity.oidc.issuer" \ + --output text +``` -| aws eks describe-cluster \\ \--name "$EKS\_CLUSTER" \\ \--region "$REGION" \\ \--query "cluster.identity.oidc.issuer" \\ \--output text | -| :---- | - -\`\`\` - -\`\`\`output - -| https://oidc.eks.us-west-2.amazonaws.com/id/50167EE12C1795D19075628E119 | -| :---- | - -\`\`\` +```output +https://oidc.eks.us-west-2.amazonaws.com/id/50167EE12C1795D19075628E119 +``` Capture the last part of the output string with the OIDC provider ID and store it as an environment variable: -\`\`\`command {title="Store OIDC provider id as environment variable"} - -| export OIDC\_ID=50167EE12C1795D19075628E119 | -| :---- | +```command {title="Store OIDC provider id as environment variable"} +export OIDC_ID=50167EE12C1795D19075628E119 +``` -\`\`\` +### Provision EBS CSI Support in the Cluster -### Provision EBS CSI support in the cluster - -The CSI provisioner is a plugin that allows Kubernetes to create and manage storage volumes—like EBS disks—on demand, whenever a \`PersistentVolumeClaim\` (PVC) is made. Provisioning EBS CSI support requires a few steps. +The CSI provisioner is a plugin that allows Kubernetes to create and manage storage volumes—like EBS disks—on demand, whenever a `PersistentVolumeClaim` (PVC) is made. Provisioning EBS CSI support requires a few steps. Create an IAM role for the EBS CSI driver with the trust policy for OIDC. 
-\`\`\`command {title="Create IAM role for EBS CSI driver"}
-
-| aws iam create-role \\ \--role-name AmazonEKS\_EBS\_CSI\_DriverRole \\ \--assume-role-policy-document "{ \\"Version\\": \\"2012-10-17\\", \\"Statement\\": \[ { \\"Effect\\": \\"Allow\\", \\"Principal\\": { \\"Federated\\": \\"arn:aws:iam::${ACCOUNT\_ID}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${OIDC\_ID}\\" }, \\"Action\\": \\"sts:AssumeRoleWithWebIdentity\\", \\"Condition\\": { \\"StringEquals\\": { \\"oidc.eks.${REGION}.amazonaws.com/id/${OIDC\_ID}:sub\\": \\"system:serviceaccount:kube-system:ebs-csi-controller-sa\\" } } } \] }" |
-| :---- |
-
-\`\`\`
-
-Attach the \`[AmazonEBSCSIDriverPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEBSCSIDriverPolicy.html)\` policy to the role.
-
-\`\`\`command {title="Attach policy to EBS CSI Driver role"}
-
-| aws iam attach-role-policy \\ \--role-name AmazonEKS\_EBS\_CSI\_DriverRole \\ \--policy-arn \\ arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy |
-| :---- |
-
-\`\`\`
+```command {title="Create IAM role for EBS CSI driver"}
+aws iam create-role \
+    --role-name AmazonEKS_EBS_CSI_DriverRole \
+    --assume-role-policy-document "{
+      \"Version\": \"2012-10-17\",
+      \"Statement\": [
+        {
+          \"Effect\": \"Allow\",
+          \"Principal\": {
+            \"Federated\": \"arn:aws:iam::${ACCOUNT_ID}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}\"
+          },
+          \"Action\": \"sts:AssumeRoleWithWebIdentity\",
+          \"Condition\": {
+            \"StringEquals\": {
+              \"oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}:sub\": \"system:serviceaccount:kube-system:ebs-csi-controller-sa\"
+            }
+          }
+        }
+      ]
+    }"
+```
+
+Attach the `[AmazonEBSCSIDriverPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEBSCSIDriverPolicy.html)` policy to the role.
+
+```command {title="Attach policy to EBS CSI Driver role"}
+aws iam attach-role-policy \
+    --role-name AmazonEKS_EBS_CSI_DriverRole \
+    --policy-arn \
+    arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
+```
 
 Install the CSI provisioner for EBS volumes.
 
-\`\`\`command {title="Install CSI provisioner for EBS"}
-
-| aws eks create-addon \\ \--cluster-name "$EKS\_CLUSTER" \\ \--addon-name aws-ebs-csi-driver \\ \--service-account-role-arn \\ "arn:aws:iam::${ACCOUNT\_ID}:role/AmazonEKS\_EBS\_CSI\_DriverRole" \\ \--region "$REGION" |
-| :---- |
-
-\`\`\`
+```command {title="Install CSI provisioner for EBS"}
+aws eks create-addon \
+    --cluster-name "$EKS_CLUSTER" \
+    --addon-name aws-ebs-csi-driver \
+    --service-account-role-arn \
+    "arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole" \
+    --region "$REGION"
+```
 
 Wait for the EBS CSI driver to become active.
 
-\`\`\`command {title="Wait for EBS CSI driver to become active"}
-
-| until \[\[ "$(aws eks describe-addon \\ \--cluster-name "$EKS\_CLUSTER" \\ \--addon-name aws-ebs-csi-driver \\ \--region "$REGION" \\ \--query 'addon.status' \\ \--output text)" \= "ACTIVE" \]\]; do echo "Waiting for aws-ebs-csi-driver to become ACTIVE…" sleep 10 done echo "EBS CSI driver is ACTIVE." |
-| :---- |
-
-\`\`\`
-
-\`\`\`output
-
-| Waiting for aws-ebs-csi-driver to become ACTIVE… Waiting for aws-ebs-csi-driver to become ACTIVE… Waiting for aws-ebs-csi-driver to become ACTIVE… EBS CSI driver is ACTIVE. |
-| :---- |
-
-\`\`\`
-
-### Create a \`StorageClass\`
-
-Use the EBS CSI provisioner to create a \`[StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)\`.
-
-\`\`\`command {title="Create a StorageClass"}
-
-| echo ' apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ebs-sc provisioner: ebs.csi.aws.com volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete' | kubectl apply \-f \- |
-| :---- |
-
-\`\`\`
-
-### Create an S3 bucket
+```command {title="Wait for EBS CSI driver to become active"}
+until [[ "$(aws eks describe-addon \
+    --cluster-name "$EKS_CLUSTER" \
+    --addon-name aws-ebs-csi-driver \
+    --region "$REGION" \
+    --query 'addon.status' \
+    --output text)" = "ACTIVE" ]]; do
+  echo "Waiting for aws-ebs-csi-driver to become ACTIVE…"
+  sleep 10
+done
+echo "EBS CSI driver is ACTIVE."
+```
+
+```output
+Waiting for aws-ebs-csi-driver to become ACTIVE…
+Waiting for aws-ebs-csi-driver to become ACTIVE…
+Waiting for aws-ebs-csi-driver to become ACTIVE…
+EBS CSI driver is ACTIVE.
+```
+
+### Create a `StorageClass`
+
+Use the EBS CSI provisioner to create a `[StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)`.
+
+```command {title="Create a StorageClass"}
+echo '
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: ebs-sc
+provisioner: ebs.csi.aws.com
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+reclaimPolicy: Delete' | kubectl apply -f -
+```
+
+### Create an S3 Bucket
 
 Create the S3 bucket where Velero can store its backups.
 
-\`\`\`command {title="Add the BUCKET\_NAME environment variable to the terminal session"}
-
-| export BUCKET\_NAME=velero-backup-7777 |
-| :---- |
-
-\`\`\`
+```command {title="Add the BUCKET_NAME environment variable to the terminal session"}
+export BUCKET_NAME=velero-backup-7777
+```
 
-\`\`\`command {title="Create S3 bucket"}
+```command {title="Create S3 bucket"}
+aws s3api create-bucket \
+    --bucket "$BUCKET_NAME" \
+    --region "$REGION" \
+    --create-bucket-configuration LocationConstraint="$REGION"
+```
 
-| aws s3api create-bucket \\ \--bucket "$BUCKET\_NAME" \\ \--region "$REGION" \\ \--create-bucket-configuration LocationConstraint="$REGION" |
-| :---- |
-
-\`\`\`
-
-\`\`\`output
-
-| { "Location": "http://velero-backup-7777.s3.amazonaws.com/" } |
-| :---- |
-
-\`\`\`
+```output
+{
+    "Location": "http://velero-backup-7777.s3.amazonaws.com/"
+}
+```
 
 The bucket should not be public. Only Velero should access it.
 
-\`\`\`command {title="Block public access to S3 bucket"}
-
-| aws s3api put-public-access-block \\ \--bucket "$BUCKET\_NAME" \\ \--public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true |
-| :---- |
+```command {title="Block public access to S3 bucket"}
+aws s3api put-public-access-block \
+    --bucket "$BUCKET_NAME" \
+    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
+```
 
-\`\`\`
-
-### Set up IAM credentials for Velero to use S3
+### Set up IAM Credentials for Velero to Use S3
 
 To give Velero access to the S3 bucket, begin by creating the IAM policy.
-\`\`\`command {title="Create IAM policy for Velero to access S3, then echo policy ARN"}
-
-| POLICY\_ARN=$(aws iam create-policy \\ \--policy-name VeleroS3AccessPolicy \\ \--policy-document "{ \\"Version\\": \\"2012-10-17\\", \\"Statement\\": \[ { \\"Sid\\": \\"ListAndGetBucket\\", \\"Effect\\": \\"Allow\\", \\"Action\\": \[ \\"s3:ListBucket\\", \\"s3:GetBucketLocation\\" \], \\"Resource\\": \\"arn:aws:s3:::$BUCKET\_NAME\\" }, { \\"Sid\\": \\"CRUDonObjects\\", \\"Effect\\": \\"Allow\\", \\"Action\\": \[ \\"s3:PutObject\\", \\"s3:GetObject\\", \\"s3:DeleteObject\\" \], \\"Resource\\": \\"arn:aws:s3:::$BUCKET\_NAME/\*\\" } \] }" \\ \--query 'Policy.Arn' \--output text) echo $POLICY\_ARN |
-| :---- |
-
-\`\`\`
-
-\`\`\`output
-
-| arn:aws:iam::431966127852:policy/VeleroS3AccessPolicy |
-| :---- |
-
-\`\`\`
+```command {title="Create IAM policy for Velero to access S3, then echo policy ARN"}
+POLICY_ARN=$(aws iam create-policy \
+    --policy-name VeleroS3AccessPolicy \
+    --policy-document "{
+      \"Version\": \"2012-10-17\",
+      \"Statement\": [
+        {
+          \"Sid\": \"ListAndGetBucket\",
+          \"Effect\": \"Allow\",
+          \"Action\": [
+            \"s3:ListBucket\",
+            \"s3:GetBucketLocation\"
+          ],
+          \"Resource\": \"arn:aws:s3:::$BUCKET_NAME\"
+        },
+        {
+          \"Sid\": \"CRUDonObjects\",
+          \"Effect\": \"Allow\",
+          \"Action\": [
+            \"s3:PutObject\",
+            \"s3:GetObject\",
+            \"s3:DeleteObject\"
+          ],
+          \"Resource\": \"arn:aws:s3:::$BUCKET_NAME/*\"
+        }
+      ]
+    }" \
+    --query 'Policy.Arn' --output text)
+
+echo $POLICY_ARN
+```
+
+```output
+arn:aws:iam::431966127852:policy/VeleroS3AccessPolicy
+```
 
 Create the Velero user and attach the policy.
 
-\`\`\`command {title="Create Velero user and attach policy"}
-
-| aws iam create-user \--user-name velero aws iam attach-user-policy \\ \--user-name velero \\ \--policy-arn "$POLICY\_ARN" |
-| :---- |
-
-\`\`\`
-
-\`\`\`output
-
-| { "User": { "Path": "/", "UserName": "velero", "UserId": "AIDAWE6V6YHZ6334NZZ3Z", "Arn": "arn:aws:iam::431966127852:user/velero", "CreateDate": "2025-05-31T07:03:40+00:00" } } |
-| :---- |
-
-\`\`\`
-
-The \`velero\` IAM user now has access to the bucket. Create a credentials file for Velero to use.
-
-\`\`\`command {title="Create credentials file"}
-
-| CREDENTIALS\_FILE=\~/aws-credentials-velero aws iam create-access-key \--user-name velero \--query 'AccessKey.\[AccessKeyId,SecretAccessKey\]' \--output text | \\ awk \-v OUT="$CREDENTIALS\_FILE" ' { print "\[default\]" \> OUT; print "aws\_access\_key\_id \= "$1 \>\> OUT; print "aws\_secret\_access\_key \= "$2 \>\> OUT; }' |
-| :---- |
-
-\`\`\`
+```command {title="Create Velero user and attach policy"}
+aws iam create-user \
+    --user-name velero
+
+aws iam attach-user-policy \
+    --user-name velero \
+    --policy-arn "$POLICY_ARN"
+```
+
+```output
+{
+    "User": {
+        "Path": "/",
+        "UserName": "velero",
+        "UserId": "AIDAWE6V6YHZ6334NZZ3Z",
+        "Arn": "arn:aws:iam::431966127852:user/velero",
+        "CreateDate": "2025-05-31T07:03:40+00:00"
+    }
+}
+```
+
+The `velero` IAM user now has access to the bucket. Create a credentials file for Velero to use.
+
+```command {title="Create credentials file"}
+CREDENTIALS_FILE=~/aws-credentials-velero
+
+aws iam create-access-key \
+    --user-name velero \
+    --query 'AccessKey.[AccessKeyId,SecretAccessKey]' \
+    --output text | \
+    awk -v OUT="$CREDENTIALS_FILE" '
+    {
+      print "[default]" > OUT;
+      print "aws_access_key_id = "$1 >> OUT;
+      print "aws_secret_access_key = "$2 >> OUT;
+    }'
+```
 
 Verify the credentials file was created successfully.
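+
+For a quick sanity check, you can print the generated file and confirm it contains a `[default]` profile with an access key ID and secret access key (a minimal example, assuming the `CREDENTIALS_FILE` path set above):
+
+```command {title="Inspect the Velero credentials file"}
+cat "$CREDENTIALS_FILE"
+```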
@@ -259,184 +310,298 @@ Verify the credentials file was created successfully. With the source cluster properly prepared, you can install Velero on the EKS cluster, configured with the S3 backup location and credentials file that authorizes access to the bucket. -\`\`\`command {title="Install Velero on source cluster"} - -| velero install \\ \--provider aws \\ \--plugins velero/velero-plugin-for-aws:v1.12.0 \\ \--bucket "$BUCKET\_NAME" \\ \--secret-file $CREDENTIALS\_FILE \\ \--backup-location-config region=$REGION \\ \--use-node-agent \\ \--use-volume-snapshots=false \\ \--default-volumes-to-fs-backup | -| :---- | - -\`\`\` - -\`\`\`output - -| CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client CustomResourceDefinition/backuprepositories.velero.io: created CustomResourceDefinition/backups.velero.io: attempting to create resource CustomResourceDefinition/backups.velero.io: attempting to create resource client CustomResourceDefinition/backups.velero.io: created CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client CustomResourceDefinition/backupstoragelocations.velero.io: created CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client CustomResourceDefinition/deletebackuprequests.velero.io: created CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client CustomResourceDefinition/downloadrequests.velero.io: created CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client CustomResourceDefinition/podvolumebackups.velero.io: created CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client CustomResourceDefinition/podvolumerestores.velero.io: created CustomResourceDefinition/restores.velero.io: attempting to create resource CustomResourceDefinition/restores.velero.io: attempting to create resource client CustomResourceDefinition/restores.velero.io: created CustomResourceDefinition/schedules.velero.io: attempting to create resource CustomResourceDefinition/schedules.velero.io: attempting to create resource client CustomResourceDefinition/schedules.velero.io: created CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client CustomResourceDefinition/serverstatusrequests.velero.io: created CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client CustomResourceDefinition/volumesnapshotlocations.velero.io: created CustomResourceDefinition/datadownloads.velero.io: attempting to create resource CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client CustomResourceDefinition/datadownloads.velero.io: created CustomResourceDefinition/datauploads.velero.io: attempting to create resource 
CustomResourceDefinition/datauploads.velero.io: attempting to create resource client CustomResourceDefinition/datauploads.velero.io: created Waiting for resources to be ready in cluster... Namespace/velero: attempting to create resource Namespace/velero: attempting to create resource client Namespace/velero: created ClusterRoleBinding/velero: attempting to create resource ClusterRoleBinding/velero: attempting to create resource client ClusterRoleBinding/velero: created ServiceAccount/velero: attempting to create resource ServiceAccount/velero: attempting to create resource client ServiceAccount/velero: created Secret/cloud-credentials: attempting to create resource Secret/cloud-credentials: attempting to create resource client Secret/cloud-credentials: created BackupStorageLocation/default: attempting to create resource BackupStorageLocation/default: attempting to create resource client BackupStorageLocation/default: created Deployment/velero: attempting to create resource Deployment/velero: attempting to create resource client Deployment/velero: created DaemonSet/node-agent: attempting to create resource DaemonSet/node-agent: attempting to create resource client DaemonSet/node-agent: created Velero is installed\! ⛵ Use 'kubectl logs deployment/velero \-n velero' to view the status. | -| :---- | - -\`\`\` +```command {title="Install Velero on source cluster"} +velero install \ + --provider aws \ + --plugins velero/velero-plugin-for-aws:v1.12.0 \ + --bucket "$BUCKET_NAME" \ + --secret-file $CREDENTIALS_FILE \ + --backup-location-config region=$REGION \ + --use-node-agent \ + --use-volume-snapshots=false \ + --default-volumes-to-fs-backup +``` + +```output +CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource +CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client +CustomResourceDefinition/backuprepositories.velero.io: created +CustomResourceDefinition/backups.velero.io: attempting to create resource +CustomResourceDefinition/backups.velero.io: attempting to create resource client +CustomResourceDefinition/backups.velero.io: created +CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource +CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client +CustomResourceDefinition/backupstoragelocations.velero.io: created +CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource +CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client +CustomResourceDefinition/deletebackuprequests.velero.io: created +CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource +CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client +CustomResourceDefinition/downloadrequests.velero.io: created +CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource +CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client +CustomResourceDefinition/podvolumebackups.velero.io: created +CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource +CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client +CustomResourceDefinition/podvolumerestores.velero.io: created +CustomResourceDefinition/restores.velero.io: attempting to create resource +CustomResourceDefinition/restores.velero.io: attempting to create resource client 
+CustomResourceDefinition/restores.velero.io: created +CustomResourceDefinition/schedules.velero.io: attempting to create resource +CustomResourceDefinition/schedules.velero.io: attempting to create resource client +CustomResourceDefinition/schedules.velero.io: created +CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource +CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client +CustomResourceDefinition/serverstatusrequests.velero.io: created +CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource +CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client +CustomResourceDefinition/volumesnapshotlocations.velero.io: created +CustomResourceDefinition/datadownloads.velero.io: attempting to create resource +CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client +CustomResourceDefinition/datadownloads.velero.io: created +CustomResourceDefinition/datauploads.velero.io: attempting to create resource +CustomResourceDefinition/datauploads.velero.io: attempting to create resource client +CustomResourceDefinition/datauploads.velero.io: created +Waiting for resources to be ready in cluster... +Namespace/velero: attempting to create resource +Namespace/velero: attempting to create resource client +Namespace/velero: created +ClusterRoleBinding/velero: attempting to create resource +ClusterRoleBinding/velero: attempting to create resource client +ClusterRoleBinding/velero: created +ServiceAccount/velero: attempting to create resource +ServiceAccount/velero: attempting to create resource client +ServiceAccount/velero: created +Secret/cloud-credentials: attempting to create resource +Secret/cloud-credentials: attempting to create resource client +Secret/cloud-credentials: created +BackupStorageLocation/default: attempting to create resource +BackupStorageLocation/default: attempting to create resource client +BackupStorageLocation/default: created +Deployment/velero: attempting to create resource +Deployment/velero: attempting to create resource client +Deployment/velero: created +DaemonSet/node-agent: attempting to create resource +DaemonSet/node-agent: attempting to create resource client +DaemonSet/node-agent: created +Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status. +``` To perform its full range of tasks, Velero creates its own namespace, several CRDs, a deployment, a service, and a node agent. Verify the Velero installation. -\`\`\`command {title="Check Velero version"} - -| velero version | -| :---- | - -\`\`\` - -\`\`\`output - -| Client: Version: v1.16.1 Git commit: \- Server: Version: v1.16.1 | -| :---- | - -\`\`\` - -Check the pods in the \`velero\` namespace. +```command {title="Check Velero version"} +velero version +``` -\`\`\`command {title="Get pods in Velero namespace"} +```output +Client: + Version: v1.16.1 + Git commit: - +Server: + Version: v1.16.1 +``` -| kubectl get pods \-n velero | -| :---- | +Check the pods in the `velero` namespace. 
-\`\`\` +```command {title="Get pods in Velero namespace"} +kubectl get pods -n velero +``` -\`\`\`output - -| NAME READY STATUS RESTARTS AGE node-agent-chnzw 1/1 Running 0 59s node-agent-ffqlg 1/1 Running 0 59s velero-6f4546949d-kjtnv 1/1 Running 0 59s | -| :---- | - -\`\`\` +```output +NAME READY STATUS RESTARTS AGE +node-agent-chnzw 1/1 Running 0 59s +node-agent-ffqlg 1/1 Running 0 59s +velero-6f4546949d-kjtnv 1/1 Running 0 59s +``` Verify the backup location configured for Velero. -\`\`\`command {title="Get backup location for Velero"} - -| velero backup-location get | -| :---- | - -\`\`\` - -\`\`\`output - -| NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT default aws velero-backup-7777 Available 2025-05-31 10:12:12 \+0300 IDT ReadWrite true | -| :---- | +```command {title="Get backup location for Velero"} +velero backup-location get +``` -\`\`\` +```output +NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT +default aws velero-backup-7777 Available 2025-05-31 10:12:12 +0300 IDT ReadWrite true +``` ## Create a PersistentVolumeClaim in Source Cluster In Kubernetes, the PersistentVolumeClaim (PVC) is the mechanism for creating persistent volumes that can be mounted to pods in the cluster. Create the PVC in the source cluster. -\`\`\`command {title="Create PersistentVolumeClaim"} - -| echo ' apiVersion: v1 kind: PersistentVolumeClaim metadata: name: the-pvc spec: accessModes: \- ReadWriteOnce storageClassName: ebs-sc resources: requests: storage: 1Mi ' | kubectl \-n default apply \-f \- | -| :---- | - -\`\`\` - -Note that this command uses the \`StorageClass\` named \`ebs-sc\`, which was created earlier. - -\`\`\`output - -| persistentvolumeclaim/the-pvc created | -| :---- | - -\`\`\` +```command {title="Create PersistentVolumeClaim"} +echo ' +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: the-pvc +spec: + accessModes: + - ReadWriteOnce + storageClassName: ebs-sc + resources: + requests: + storage: 1Mi +' | kubectl -n default apply -f - +``` + +Note that this command uses the `StorageClass` named `ebs-sc`, which was created earlier. + +```output +persistentvolumeclaim/the-pvc created +``` Verify the PVC was created successfully. -\`\`\`command {title="Get PVC"} - -| kubectl get pvc \-n default | -| :---- | - -\`\`\` - -\`\`\`output - -| NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE the-pvc Pending ebs-sc \ 9s | -| :---- | +```command {title="Get PVC"} +kubectl get pvc -n default +``` -\`\`\` +```output +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE +the-pvc Pending ebs-sc 9s +``` -Its status should be \`Pending\`. This is by design, as the status remains \`Pending\` until the first consumer uses it. +Its status should be `Pending`. This is by design, as the status remains `Pending` until the first consumer uses it. ## Run a Pod to Use the PVC and Write Data Once a pod mounts a volume backed by the PVC, a corresponding persistent volume (in this example, backed by AWS EBS) will be created. 
Run a pod to mount the volume with the following command: -\`\`\`command {title="Run a pod to mount the PVC-backed volume"} - -| kubectl run the-pod \\ \--image=bash:latest \\ \--restart=Never \\ \-it \\ \--overrides=' { "apiVersion": "v1", "spec": { "volumes": \[ { "name": "the-vol", "persistentVolumeClaim": { "claimName": "the-pvc" } } \], "containers": \[ { "name": "the-container", "image": "bash:latest", "command": \["bash"\], "stdin": true, "tty": true, "volumeMounts": \[ { "mountPath": "/data", "name": "the-vol" } \] } \] } }' \\ \-- bash | -| :---- | - -\`\`\` +```command {title="Run a pod to mount the PVC-backed volume"} +kubectl run the-pod \ + --image=bash:latest \ + --restart=Never \ + -it \ + --overrides=' +{ + "apiVersion": "v1", + "spec": { + "volumes": [ + { + "name": "the-vol", + "persistentVolumeClaim": { + "claimName": "the-pvc" + } + } + ], + "containers": [ + { + "name": "the-container", + "image": "bash:latest", + "command": ["bash"], + "stdin": true, + "tty": true, + "volumeMounts": [ + { + "mountPath": "/data", + "name": "the-vol" + } + ] + } + ] + } +}' \ + -- bash +``` From the open bash shell, write sample data into the volume. -\`\`\`command {title="Use pod's bash shell to write sample data"} - -| bash-5.2\# echo "Some data" \> /data/some-data.txt bash-5.2\# cat /data/some-data.txt | -| :---- | - -\`\`\` - -\`\`\`output +```command {title="Use pod's bash shell to write sample data"} +echo "Some data" > /data/some-data.txt +cat /data/some-data.txt +``` -| Some data | -| :---- | +```output +Some data +``` -\`\`\` - -## Create a Velero Backup, then Verify +## Create a Velero Backup, Then Verify With Velero installed and the persistent volume in place, run the backup command: -\`\`\`command {title="Use Velero to create a backup"} - -| velero backup create test-backup \--wait | -| :---- | - -\`\`\` - -\`\`\`output - -| Backup request "test-backup" submitted successfully. Waiting for backup to complete. You may safely press ctrl-c to stop waiting \- your backup will continue in the background. ............................................................. Backup completed with status: Completed. You may check for more information using the commands \`velero backup describe test-backup\` and \`velero backup logs test-backup\`. 
|
-| :---- |
-
-\`\`\`
-
-After the backup process has completed, use the \`backup describe\` command to confirm a successful backup:
-
-\`\`\`command {title="Describe the backup"}
-
-| velero backup describe test-backup |
-| :---- |
-
-\`\`\`
-
-\`\`\`output
-
-| Name: test-backup Namespace: velero Labels: velero.io/storage-location=default Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.32.5-eks-5d4a308 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=32 Phase: Completed Namespaces: Included: \* Excluded: \ Resources: Included: \* Excluded: \ Cluster-scoped: auto Label selector: \ Or label selector: \ Storage Location: default Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: \ Backup Format Version: 1.1.0 Started: 2025-05-31 21:44:31 \+0300 IDT Completed: 2025-05-31 21:45:33 \+0300 IDT Expiration: 2025-06-30 21:44:31 \+0300 IDT Total items to be backed up: 454 Items backed up: 454 Backup Volumes: Velero-Native Snapshots: \ CSI Snapshots: \ Pod Volume Backups \- kopia (specify \--details for more information): Completed: 11 HooksAttempted: 0 HooksFailed: 0 |
-| :---- |
-
-The critical information to verify is the Kopia item for pod volume backups toward the end of the output. Note in the above example that it says \`Completed: 11\`. This verifies the presence of backups.
+```command {title="Use Velero to create a backup"}
+velero backup create test-backup --wait
+```
+
+```output
+Backup request "test-backup" submitted successfully.
+Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
+.............................................................
+Backup completed with status: Completed. You may check for more information using the commands `velero backup describe test-backup` and `velero backup logs test-backup`.
+```
+
+After the backup process has completed, use the `backup describe` command to confirm a successful backup:
+
+```command {title="Describe the backup"}
+velero backup describe test-backup
+```
+
+```output
+Name:         test-backup
+Namespace:    velero
+Labels:       velero.io/storage-location=default
+Annotations:  velero.io/resource-timeout=10m0s
+              velero.io/source-cluster-k8s-gitversion=v1.32.5-eks-5d4a308
+              velero.io/source-cluster-k8s-major-version=1
+              velero.io/source-cluster-k8s-minor-version=32
+Phase:  Completed
+Namespaces:
+  Included:  *
+  Excluded:
+Resources:
+  Included:        *
+  Excluded:
+  Cluster-scoped:  auto
+Label selector:
+Or label selector:
+Storage Location:  default
+Velero-Native Snapshot PVs:  auto
+Snapshot Move Data:          false
+Data Mover:                  velero
+TTL:  720h0m0s
+CSISnapshotTimeout:    10m0s
+ItemOperationTimeout:  4h0m0s
+Hooks:
+Backup Format Version:  1.1.0
+Started:    2025-05-31 21:44:31 +0300 IDT
+Completed:  2025-05-31 21:45:33 +0300 IDT
+Expiration:  2025-06-30 21:44:31 +0300 IDT
+Total items to be backed up:  454
+Items backed up:              454
+Backup Volumes:
+  Velero-Native Snapshots:
+  CSI Snapshots:
+  Pod Volume Backups - kopia (specify --details for more information):
+    Completed:  11
+HooksAttempted:  0
+HooksFailed:  0
+```
+
+The critical information to verify is the Kopia item for pod volume backups toward the end of the output. Note in the above example that it says `Completed: 11`. This verifies the presence of backups.
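+
+To see exactly which pod volumes were included, you can optionally re-run the describe command with the `--details` flag suggested in the output above (an extra verification step, not required for the migration):
+
+```command {title="Describe the backup with pod volume details"}
+velero backup describe test-backup --details
+```
+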
## Verify Backup in S3 To close the loop, verify that the backup data has made its way to the configured S3 bucket. -\`\`\`command {title="List contents of test backup"} - -| s3cmd ls s3://$BUCKET\_NAME/backups/test-backup/ | -| :---- | - -\`\`\` - -\`\`\`output - -| 2025-05-31 21:45:34 29 test-backup-csi-volumesnapshotclasses.json.gz 2025-05-31 21:45:33 29 test-backup-csi-volumesnapshotcontents.json.gz 2025-05-31 21:45:34 29 test-backup-csi-volumesnapshots.json.gz 2025-05-31 21:45:33 27 test-backup-itemoperations.json.gz 2025-05-31 21:45:33 23733 test-backup-logs.gz 2025-05-31 21:45:34 2481 test-backup-podvolumebackups.json.gz 2025-05-31 21:45:34 3022 test-backup-resource-list.json.gz 2025-05-31 21:45:34 49 test-backup-results.gz 2025-05-31 21:45:33 922 test-backup-volumeinfo.json.gz 2025-05-31 21:45:34 29 test-backup-volumesnapshots.json.gz 2025-05-31 21:45:33 138043 test-backup.tar.gz 2025-05-31 21:45:34 2981 velero-backup.json | -| :---- | - -\`\`\` +```command {title="List contents of test backup"} +s3cmd ls s3://$BUCKET_NAME/backups/test-backup/ +``` + +```output +2025-05-31 21:45:34 29 test-backup-csi-volumesnapshotclasses.json.gz +2025-05-31 21:45:33 29 test-backup-csi-volumesnapshotcontents.json.gz +2025-05-31 21:45:34 29 test-backup-csi-volumesnapshots.json.gz +2025-05-31 21:45:33 27 test-backup-itemoperations.json.gz +2025-05-31 21:45:33 23733 test-backup-logs.gz +2025-05-31 21:45:34 2481 test-backup-podvolumebackups.json.gz +2025-05-31 21:45:34 3022 test-backup-resource-list.json.gz +2025-05-31 21:45:34 49 test-backup-results.gz +2025-05-31 21:45:33 922 test-backup-volumeinfo.json.gz +2025-05-31 21:45:34 29 test-backup-volumesnapshots.json.gz +2025-05-31 21:45:33 138043 test-backup.tar.gz +2025-05-31 21:45:34 2981 velero-backup.json +``` ## Provision LKE Cluster @@ -444,156 +609,153 @@ The persistent volume on your source cluster has been backed up using Velero. No See the [LKE documentation](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) for instructions on how to provision a cluster using Cloud Manager. -### See available Kubernetes versions - -Use the Linode CLI (linode-cli) to see available Kubernetes versions: - -\`\`\`command {title="List available Kubernetes versions"} - -| linode lke versions-list | -| :---- | - -\`\`\` +### See Available Kubernetes Versions -\`\`\`output +Use the Linode CLI (`linode-cli`) to see available Kubernetes versions: -| ┌──────┐ │ id │ ├──────┤ │ 1.32 │ ├──────┤ │ 1.31 │ └──────┘ | -| :---- | +```command {title="List available Kubernetes versions"} +linode lke versions-list +``` -\`\`\` +```output +┌──────┐ +│ id │ +├──────┤ +│ 1.32 │ +├──────┤ +│ 1.31 │ +└──────┘ +``` Unless specific requirements dictate otherwise, it’s generally recommended to provision the latest version of Kubernetes. -### Create a cluster - -Determine the type of Linode to provision. The examples in this guide use the g6-standard-2 Linode, which features two CPU cores and 4 GB of memory. 
Run the following command to create a cluster labeled \`velero-to-lke\` which uses the \`g6-standard-2\` Linode: - -\`\`\`command {title="Create LKE cluster"} - -| lin lke cluster-create \\ \--label velero-to-lke \\ \--k8s\_version 1.32 \\ \--region us-sea \\ \--node\_pools '\[{ "type": "g6-standard-2", "count": 1, "autoscaler": { "enabled": true, "min": 1, "max": 3 } }\]' | -| :---- | - -\`\`\` - -\`\`\`output - -| ┌────────┬───────────────┬────────┬─────────────┐ │ id │ label │ region │ k8s\_version │ ├────────┼───────────────┼────────┼─────────────┤ │ 463649 │ velero-to-lke │ us-sea │ 1.32 │ └────────┴───────────────┴────────┴─────────────┘ | -| :---- | - -\`\`\` +### Create a Cluster + +Determine the type of Linode to provision. The examples in this guide use the g6-standard-2 Linode, which features two CPU cores and 4 GB of memory. Run the following command to create a cluster labeled `velero-to-lke` which uses the `g6-standard-2` Linode: + +```command {title="Create LKE cluster"} +lin lke cluster-create \ + --label velero-to-lke \ + --k8s_version 1.32 \ + --region us-sea \ + --node_pools '[{ + "type": "g6-standard-2", + "count": 1, + "autoscaler": { + "enabled": true, + "min": 1, + "max": 3 + } + }]' +``` + +```output +┌────────┬───────────────┬────────┬─────────────┐ +│ id │ label │ region │ k8s_version │ +├────────┼───────────────┼────────┼─────────────┤ +│ 463649 │ velero-to-lke │ us-sea │ 1.32 │ +└────────┴───────────────┴────────┴─────────────┘ +``` ### Access the cluster -To access your cluster, fetch the cluster credentials as a \`kubeconfig\` file. Your cluster’s \`kubeconfig\` can also be [downloaded via the Cloud Manager](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#access-and-download-your-kubeconfig). Use the following command to retrieve the cluster’s ID: - -\`\`\`command {title="Retrieve cluster ID and set environment variable"} - -| CLUSTER\_ID=$(linode lke clusters-list \--json | \\ jq \-r '.\[\] | select(.label \== "velero-to-lke") | .id') | -| :---- | - -\`\`\` - -Retrieve the \`kubeconfig\` file and save it to \`\~/.kube/lke-config\`: -\`\`\`command {title="Retrieve and save kubeconfig file"} +To access your cluster, fetch the cluster credentials as a `kubeconfig` file. Your cluster’s `kubeconfig` can also be [downloaded via the Cloud Manager](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#access-and-download-your-kubeconfig). 
Use the following command to retrieve the cluster’s ID: -| linode lke kubeconfig-view \\ \--json "$CLUSTER\_ID" \\ | jq \-r '.\[0\].kubeconfig' \\ | base64 \--decode \> \~/.kube/lke-config | -| :---- | +```command {title="Retrieve cluster ID and set environment variable"} +CLUSTER_ID=$(linode lke clusters-list --json | \ + jq -r '.[] | select(.label == "velero-to-lke") | .id') +``` -\`\`\` +Retrieve the `kubeconfig` file and save it to `\~/.kube/lke-config`: -After saving the \`kubeconfig\`, access your cluster by using \`kubectl\` and specifying the file: +```command {title="Retrieve and save kubeconfig file"} +linode lke kubeconfig-view \ + --json "$CLUSTER_ID" \ + | jq -r '.[0].kubeconfig' \ + | base64 --decode > ~/.kube/lke-config +``` -\`\`\`command {title="Use kubectl with kubeconfig to get nodes"} +After saving the `kubeconfig`, access your cluster by using `kubectl` and specifying the file: -| kubectl get nodes \--kubeconfig \~/.kube/lke-config | -| :---- | +```command {title="Use kubectl with kubeconfig to get nodes"} +kubectl get nodes --kubeconfig ~/.kube/lke-config +``` -\`\`\` - -\`\`\`output - -| NAME STATUS ROLES AGE VERSION lke463649-678334-401dde8e0000 Ready \ 7m27s v1.32.1 | -| :---- | - -\`\`\` +```output +NAME STATUS ROLES AGE VERSION +lke463649-678334-401dde8e0000 Ready 7m27s v1.32.1 +``` ## Install Velero in LKE -If you are working in a different terminal session, ensure you have the environment variables for \`BUCKET\_NAME\`, \`REGION\`, and \`CREDENTIALS\_FILE\` with values identical to those earlier in this guide. In case you need to set them again, the command will look similar to: - -\`\`\`command {title="Set environment variables"} - -| export BUCKET\_NAME=velero-backup-7777 export REGION=us-west-2 export CREDENTIALS\_FILE=\~/aws-credentials-velero | -| :---- | +If you are working in a different terminal session, ensure you have the environment variables for `BUCKET_NAME`, `REGION`, and `CREDENTIALS_FILE` with values identical to those earlier in this guide. 
In case you need to set them again, the command will look similar to: -\`\`\` +```command {title="Set environment variables"} +export BUCKET_NAME=velero-backup-7777 +export REGION=us-west-2 +export CREDENTIALS_FILE=~/aws-credentials-velero +``` Run the following command to install Velero in your LKE cluster: -\`\`\`command {title="Install Velero in LKE"} - -| velero install \\ \--kubeconfig \~/.kube/lke-config \\ \--provider aws \\ \--plugins velero/velero-plugin-for-aws:v1.12.0 \\ \--bucket "$BUCKET\_NAME" \\ \--secret-file $CREDENTIALS\_FILE \\ \--backup-location-config region=$REGION \\ \--use-node-agent \\ \--use-volume-snapshots=false \\ \--default-volumes-to-fs-backup | -| :---- | - -\`\`\` +```command {title="Install Velero in LKE"} +velero install \ + --kubeconfig ~/.kube/lke-config \ + --provider aws \ + --plugins velero/velero-plugin-for-aws:v1.12.0 \ + --bucket "$BUCKET_NAME" \ + --secret-file $CREDENTIALS_FILE \ + --backup-location-config region=$REGION \ + --use-node-agent \ + --use-volume-snapshots=false \ + --default-volumes-to-fs-backup +``` Verify the Velero installation: -\`\`\`command {title="Verify the Velero installation"} - -| kubectl logs deployment/velero \\ \-n velero \\ \--kubeconfig \~/.kube/lke-config \\ | grep 'BackupStorageLocations is valid' | -| :---- | - -\`\`\` - -\`\`\`output +```command {title="Verify the Velero installation"} +kubectl logs deployment/velero \ + -n velero \ + --kubeconfig ~/.kube/lke-config \ + | grep 'BackupStorageLocations is valid' +``` -| Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init) time="2025-05-31T20:52:50Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup\_storage\_location\_controller.go:128" | -| :---- | - -\`\`\` +```output +Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init) +time="2025-05-31T20:52:50Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:128" +``` With the backup storage location properly configured, run this command to get information about existing backups. -\`\`\`command {title="Get backups"} - -| velero backup get \--kubeconfig \~/.kube/lke-config | -| :---- | - -\`\`\` +```command {title="Get backups"} +velero backup get --kubeconfig ~/.kube/lke-config +``` -\`\`\`output - -| NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR test\-backup Completed 0 0 2025-05-31 21:44:31 \+0300 IDT 29d default \ | -| :---- | - -\`\`\` +```output +NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR +test-backup Completed 0 0 2025-05-31 21:44:31 +0300 IDT 29d default +``` ## Restore the Backup in LKE Now, use Velero to restore your source cluster backup into your destination cluster at LKE. -\`\`\`command {title="Use Velero to restore a backup"} - -| velero restore create test\-restore \\ \--from-backup test\-backup \\ \--kubeconfig \~/.kube/lke-config | -| :---- | +```command {title="Use Velero to restore a backup"} +velero restore create test-restore \ + --from-backup test-backup \ + --kubeconfig ~/.kube/lke-config +``` -\`\`\` +```output +Restore request "test-restore" submitted successfully. +Run `velero restore describe test-restore` or `velero restore logs test-restore` for more details. 
+``` -\`\`\`output +Check the restore status with the following command: -| Restore request "test-restore" submitted successfully. Run \`velero restore describe test\-restore\` or \`velero restore logs test\-restore\` for more details. | -| :---- | - -\`\`\` - -Check the restore status with the following command: -\`\`\`command {title="Check restore status"} - -| velero restore describe test\-restore \--kubeconfig \~/.kube/lke-config | -| :---- | - -\`\`\` +```command {title="Check restore status"} +velero restore describe test-restore --kubeconfig ~/.kube/lke-config +``` ## Post-Restore Adjustments @@ -601,131 +763,125 @@ Because you are transitioning from one Kubernetes provider to another, you may n For example, if your destination cluster is at LKE, you will want to update your PVC to use the Linode storage class. Review the Linode CSI drivers with the following command: -\`\`\`command {title="See current CSI drivers"} - -| kubectl get csidrivers \--kubeconfig \~/.kube/lke-config | -| :---- | +```command {title="See current CSI drivers"} +kubectl get csidrivers --kubeconfig ~/.kube/lke-config +``` -\`\`\` - -\`\`\`output - -| NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE ebs.csi.aws.com true false false \ false Persistent 22m efs.csi.aws.com false false false \ false Persistent 22m linodebs.csi.linode.com true true false \ false Persistent 69m | -| :---- | - -\`\`\` +```output +NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE +ebs.csi.aws.com true false false false Persistent 22m +efs.csi.aws.com false false false false Persistent 22m +linodebs.csi.linode.com true true false false Persistent 69m +``` Review the available storage classes: -\`\`\`command {title="Review available storage classes"} - -| kubectl get storageclass \--kubeconfig \~/.kube/lke-config | -| :---- | - -\`\`\` - -\`\`\`output - -| NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE ebs-sc ebs.csi.aws.com Delete WaitForFirstConsumer true 6h22m gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 6h22m linode-block-storage linodebs.csi.linode.com Delete Immediate true 7h9m linode-block-storage-retain (default) linodebs.csi.linode.com Retain Immediate true 7h9m | -| :---- | - -\`\`\` - -Use the default \`linode-block-storage-retain\` storage class. However, you must first delete the restored PVC and recreate it with the new storage class. 
- -\`\`\`command {title="Delete the restored PVC"} - -| kubectl delete pvc the-pvc \--kubeconfig \~/.kube/lke-config persistentvolumeclaim "the-pvc" deleted | -| :---- | - -\`\`\` - -\`\`\`command {title="Recreate the PVC with the new storage class"} - -| echo ' apiVersion: v1 kind: PersistentVolumeClaim metadata: name: the-pvc spec: accessModes: \- ReadWriteOnce resources: requests: storage: 1Mi ' | kubectl apply \-f \- \--kubeconfig \~/.kube/lke-config | -| :---- | - -\`\`\` - -\`\`\`output - -| persistentvolumeclaim/the-pvc created | -| :---- | - -\`\`\` +```command {title="Review available storage classes"} +kubectl get storageclass --kubeconfig ~/.kube/lke-config +``` + +```output +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +ebs-sc ebs.csi.aws.com Delete WaitForFirstConsumer true 6h22m +gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 6h22m +linode-block-storage linodebs.csi.linode.com Delete Immediate true 7h9m +linode-block-storage-retain (default) linodebs.csi.linode.com Retain Immediate true 7h9m +``` + +Use the default `linode-block-storage-retain` storage class. However, you must first delete the restored PVC and recreate it with the new storage class. + +```command {title="Delete the restored PVC"} +kubectl delete pvc the-pvc --kubeconfig ~/.kube/lke-config +persistentvolumeclaim "the-pvc" deleted +``` + +```command {title="Recreate the PVC with the new storage class"} +echo ' +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: the-pvc +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Mi +' | kubectl apply -f - --kubeconfig ~/.kube/lke-config +``` + +```output +persistentvolumeclaim/the-pvc created +``` The new PVC is bound to a new persistent volume. Run the following command to see this: -\`\`\`command {title="Get information about PVC, PV, and pod"} - -| kubectl get pvc,pv,pod \--kubeconfig \~/.kube/lke-config | -| :---- | - -\`\`\` - -\`\`\`output +```command {title="Get information about PVC, PV, and pod"} +kubectl get pvc,pv,pod --kubeconfig ~/.kube/lke-config +``` -| NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE persistentvolumeclaim/the-pvc Bound pvc-711d050fae7641ee 10Gi RWO linode-block-storage-retain \ 2m12s NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE persistentvolume/pvc-711d050fae7641ee 10Gi RWO Retain Bound default/the-pvc linode-block-storage-retain \ 2m9s NAME READY STATUS RESTARTS AGE pod/the-pod 0/1 Init:0/1 0 6h38m | -| :---- | +```output +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE +persistentvolumeclaim/the-pvc Bound pvc-711d050fae7641ee 10Gi RWO linode-block-storage-retain 2m12s -\`\`\` +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +persistentvolume/pvc-711d050fae7641ee 10Gi RWO Retain Bound default/the-pvc linode-block-storage-retain 2m9s -Unfortunately, you'll see that the pod is in an \`Init\` state as it is trying to bind to the previous (and now invalid) PVC. You need to delete the pod, stop the blocked restore (by first deleting the finalizer), and re-run the restore. +NAME READY STATUS RESTARTS AGE +pod/the-pod 0/1 Init:0/1 0 6h38m +``` -\`\`\`command {title="Delete pod and stop the blocked restore"} +Unfortunately, you'll see that the pod is in an `Init` state as it is trying to bind to the previous (and now invalid) PVC. 
You need to delete the pod, stop the blocked restore (by first deleting the finalizer), and re-run the restore. -| kubectl delete pod the-pod \--kubeconfig \~/.kube/lke-config kubectl patch restore test\-restore \\ \--patch '{"metadata":{"finalizers":\[\]}}' \\ \--type merge \\ \-n velero \\ \--kubeconfig \~/.kube/lke-config kubectl delete restore test-restore \\ \-n velero \\ \--kubeconfig \~/.kube/lke-config | -| :---- | +```command {title="Delete pod and stop the blocked restore"} +kubectl delete pod the-pod --kubeconfig ~/.kube/lke-config -\`\`\` +kubectl patch restore test-restore \ + --patch '{"metadata":{"finalizers":[]}}' \ + --type merge \ + -n velero \ + --kubeconfig ~/.kube/lke-config -Now, re-run the restore. Velero is smart enough to detect that the PVC (called \`the-pvc\`) exists and will not overwrite it unless explicitly requested to do so. +kubectl delete restore test-restore \ + -n velero \ + --kubeconfig ~/.kube/lke-config +``` -\`\`\`command {title="Re-run the Velero restore"} +Now, re-run the restore. Velero is smart enough to detect that the PVC (called `the-pvc`) exists and will not overwrite it unless explicitly requested to do so. -| velero restore create test\-restore \\ \--from-backup test\-backup \\ \--kubeconfig \~/.kube/lke-config | -| :---- | +```command {title="Re-run the Velero restore"} +velero restore create test-restore \ + --from-backup test-backup \ + --kubeconfig ~/.kube/lke-config +``` -\`\`\` - -\`\`\`output - -| Restore request "test-restore" submitted successfully. Run \`velero restore describe test\-restore\` or \`velero restore logs test\-restore\` for more details. | -| :---- | - -\`\`\` +```output +Restore request "test-restore" submitted successfully. +Run `velero restore describe test-restore` or `velero restore logs test-restore` for more details. +``` Verify your pod was restored. -\`\`\`command {title="Verify successful pod restore"} - -| kubectl get pod the-pod \--kubeconfig \~/.kube/lke-config | -| :---- | - -\`\`\` - -\`\`\`output - -| NAME READY STATUS RESTARTS AGE the-pod 1/1 Running 0 118s | -| :---- | - -\`\`\` - -The pod is \`Running\`. Now, verify the volume is mounted and you can access on LKE the data that was written to the EBS volume on AWS. - -\`\`\`command {title="Run the pod and show the sample data that was written"} +```command {title="Verify successful pod restore"} +kubectl get pod the-pod --kubeconfig ~/.kube/lke-config +``` -| kubectl exec the-pod \--kubeconfig \~/.kube/lke-config \-- cat /data/some-data.txt | -| :---- | +```output +NAME READY STATUS RESTARTS AGE +the-pod 1/1 Running 0 118s +``` -\`\`\` +The pod is `Running`. Now, verify the volume is mounted and you can access on LKE the data that was written to the EBS volume on AWS. -\`\`\`output +```command {title="Run the pod and show the sample data that was written"} +kubectl exec the-pod --kubeconfig ~/.kube/lke-config -- cat /data/some-data.txt +``` -| Defaulted container "the-container" out of: the-container, restore-wait (init) Some data | -| :---- | +```output +Defaulted container "the-container" out of: the-container, restore-wait (init) +Some data +``` -\`\`\` You have successfully performed an end-to-end backup and restore of a Kubernetes cluster (in this example, on AWS EKS) to a Linode LKE cluster, and this included persistent data migration across two different cloud object storage systems. ## Final Considerations @@ -752,19 +908,19 @@ When downtime is unavoidable, then a safer approach is to schedule it. 
Perform a While this guide focuses on migration, Velero can also support a multi-cloud Kubernetes strategy. By configuring Velero with backup locations across multiple cloud providers, you could:
 
-* Create a resilient disaster recovery setup by backup up workloads from one cluster and restoring them into another in a different cloud
-* Enable workload portability between environments, which may be helpful for hybrid deployments or to meet data redundancy requirements for compliance reasons.
+- Create a resilient disaster recovery setup by backing up workloads from one cluster and restoring them into another in a different cloud.
+- Enable workload portability between environments, which may be helpful for hybrid deployments or to meet data redundancy requirements for compliance reasons.
 
 The resources below are provided to help you become familiar with Velero when migrating your Kubernetes cluster to Linode LKE.
 
-## \#\# Additional Resources
-
-\- Velero
- - \- \[Documentation Home\]([https://velero.io/docs/v1.16/](https://velero.io/docs/v1.16/))
- - \- \[Installing the Velero CLI\](https://velero.io/docs/v1.16/basic-install/\#install-the-cli)
- - \- \[Storage provider plugins\]([https://velero.io/docs/v1.16/supported-providers/](https://velero.io/docs/v1.16/supported-providers/))
-\- Akamai Cloud
- - \- \[Linode LKE\]([https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine))
- - \- \[Migrating from AWS EKS to Linode Kubernetes Engine (LKE)\]([https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/))
- - \- \[Migrating from Azure AKS to Linode Kubernetes Engine (LKE)\]([https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/))
- - \- \[Migrating from Google GKE to Linode Kubernetes Engine (LKE)\]([https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/)) \ No newline at end of file
+## Additional Resources
+
+- Velero:
+  - [Documentation Home](https://velero.io/docs/v1.16/)
+  - [Installing the Velero CLI](https://velero.io/docs/v1.16/basic-install/#install-the-cli)
+  - [Storage provider plugins](https://velero.io/docs/v1.16/supported-providers/)
+- Akamai Cloud:
+  - [Linode LKE]([https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine))
+  - [Migrating from AWS EKS to Linode Kubernetes Engine (LKE)]([https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/))
  - [Migrating from Azure AKS to Linode Kubernetes Engine (LKE)]([https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/))
+  - [Migrating from Google GKE to Linode Kubernetes Engine (LKE)]([https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/)) \ No newline at end of
file From 55ca5722bc8a19fb43294baf59e3e1286abf7f64 Mon Sep 17 00:00:00 2001 From: Adam Overa Date: Fri, 18 Jul 2025 18:19:23 -0400 Subject: [PATCH 3/4] CI Tests Fix 1 --- .../index.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md index 3b3db77c0bf..1fe2fb3fd46 100644 --- a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md +++ b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md @@ -28,7 +28,7 @@ In scenarios such as these, DevOps engineers may depend on Velero. [**Velero**](https://velero.io/) is an open source, Kubernetes-native tool for backing up and restoring Kubernetes resources and persistent volumes. It supports backup of core resources, namespaces, deployments, services, ConfigMaps, Secrets, and customer resource definitions (CRDs). It integrates with different storage backends—such AWS S3 or Linode Object Storage—for storing and restoring backups. -This guide will walk through the process of using Velero to migrate a Kubernetes cluster with persistent volumes to Linode Kubernetes Engine (LKE). The focus of the guide will be on backing up and restoring a persistent data volume. For other aspects—such as adapting load balancing and DNS switching after the restore—refer to the Akamai Cloud guides on migrating to LKE (from [AWS EKS](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/), [Google GKE](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/), [Azure AKS](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/), or [Oracle OKE](https://www.linode.com/docs/guides/migrating-from-oracle-kubernetes-engine-to-linode-kubernetes-engine-lke/)). +This guide will walk through the process of using Velero to migrate a Kubernetes cluster with persistent volumes to Linode Kubernetes Engine (LKE). The focus of the guide will be on backing up and restoring a persistent data volume. For other aspects—such as adapting load balancing and DNS switching after the restore—refer to the Akamai Cloud guides on migrating to LKE (from [AWS EKS](/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/), [Google GKE](/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/), [Azure AKS](/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/), or [Oracle OKE](/docs/guides/migrating-from-oracle-kubernetes-engine-to-linode-kubernetes-engine-lke/)). Although what's shown in this guide will start with an AWS EKS cluster as an example, the same process can apply to most Kubernetes providers. @@ -40,7 +40,7 @@ Although what's shown in this guide will start with an AWS EKS cluster as an exa 1. Follow the steps in the _*Install* `*kubectl*`_ section of the [Getting started with LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#install-kubectl) guide to install and configure `kubectl`. 1. If migrating a cluster from AWS, ensure that you have access to your AWS account with sufficient permissions to work with EKS clusters. 1. 
Install and configure the [AWS CLI](https://aws.amazon.com/cli/) and [`eksctl`](https://eksctl.io/). The command line tooling you use may vary if migrating a cluster from another provider. -1. Install `[jq](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers)`. +1. Install `[jq](/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers)`. 1. Install the `[velero](https://velero.io/docs/v1.3.0/velero-install/)` [CLI](https://velero.io/docs/v1.3.0/velero-install/). ## Downtime During the Migration @@ -395,8 +395,8 @@ velero version ```output Client: - Version: v1.16.1 - Git commit: - + Version: v1.16.1 + Git commit: - Server: Version: v1.16.1 ``` @@ -920,7 +920,7 @@ The resources below are provided to help you become familiar with Velero when mi - [Installing the Velero CLI](https://velero.io/docs/v1.16/basic-install/#install-the-cli) - [Storage provider plugins][https://velero.io/docs/v1.16/supported-providers/] - Akamai Cloud: - - [Linode LKE]([https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine)) - - [Migrating from AWS EKS to Linode Kubernetes Engine (LKE)]([https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/)) - - [Migrating from Azure AKS to Linode Kubernetes Engine (LKE)]([https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/)) - - [Migrating from Google GKE to Linode Kubernetes Engine (LKE)]([https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/](https://www.linode.com/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/)) \ No newline at end of file + - [Linode LKE](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine) + - [Migrating from AWS EKS to Linode Kubernetes Engine (LKE)](/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/) + - [Migrating from Azure AKS to Linode Kubernetes Engine (LKE)](/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/) + - [Migrating from Google GKE to Linode Kubernetes Engine (LKE)](/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/) \ No newline at end of file From 3271f5c74e13518c7ae60e7e935daafb4494c40d Mon Sep 17 00:00:00 2001 From: Adam Overa Date: Fri, 8 Aug 2025 11:19:42 -0400 Subject: [PATCH 4/4] Tech Edit 1 --- ci/vale/dictionary.txt | 2 + .../index.md | 1775 ++++++++++------- 2 files changed, 1022 insertions(+), 755 deletions(-) diff --git a/ci/vale/dictionary.txt b/ci/vale/dictionary.txt index c0308c10455..3730ead30d8 100644 --- a/ci/vale/dictionary.txt +++ b/ci/vale/dictionary.txt @@ -88,6 +88,7 @@ architecting aren Argocd argv +ARNs arpack arping arptables @@ -773,6 +774,7 @@ filezilla filimonov findtime finalizer +finalizers finnix fintech firefart diff --git a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md index 
1fe2fb3fd46..17b4533d32a 100644
--- a/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
+++ b/docs/guides/platform/migrate-to-linode/migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero/index.md
@@ -1,62 +1,92 @@
 ---
 slug: migrating-kubernetes-workloads-to-linode-kubernetes-engine-lke-using-velero
 title: "Migrating Kubernetes Workloads to Linode Kubernetes Engine (LKE) Using Velero"
-description: "Two to three sentences describing your guide."
+description: "Migrate Kubernetes workloads and persistent volumes to Linode Kubernetes Engine (LKE) using Velero with CSI snapshots and file-system backup strategies."
 authors: ["Akamai"]
 contributors: ["Akamai"]
-published: 2025-07-18
-keywords: ['list','of','keywords','and key phrases']
+published: 2025-07-21
+keywords: ['kubernetes migration', 'velero', 'linode kubernetes engine', 'lke', 'persistent volume', 'csi snapshots', 'disaster recovery', 'multi-cloud backup', 'migrate kubernetes workloads to lke with velero', 'velero backup and restore guide', 'persistent volume migration using velero', 'csi snapshot backup for kubernetes', 'multi-cloud kubernetes disaster recovery strategy', 'linode lke persistent data migration']
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 external_resources:
-- '[Link Title 1](http://www.example.com)'
-- '[Link Title 2](http://www.example.net)'
+- '[Velero Documentation Home](https://velero.io/docs/v1.16/)'
+- '[Installing the Velero CLI](https://velero.io/docs/v1.16/basic-install/#install-the-cli)'
+- '[Velero Storage Provider Plugins](https://velero.io/docs/v1.16/supported-providers/)'
 ---
 
-Migrating a Kubernetes cluster has several use cases, including disaster recovery (for example, when your primary Kubernetes provider suffers an incident) or the need to change providers for feature or cost reasons.
+The primary reasons organizations migrate Kubernetes clusters are disaster recovery and switching providers (usually for feature or cost reasons).
 
-Performing this migration safely requires taking a complete snapshot of all the resources in the source cluster and then restoring that snapshot on the target cluster. After snapshot restoration, all external traffic is pointed to the new cluster, and the old cluster (if it can be accessed) is shut down.
+Performing this migration safely requires taking a complete snapshot of all resources in the source cluster, then restoring that snapshot on the target cluster. After snapshot restoration, all external traffic is pointed to the new cluster, and the old cluster is shut down (assuming it can still be accessed).
 
-Deploying Kubernetes resources can be straightforward if you have a solid CI/CD pipeline in place. However, there may be reasons why you can't simply point your CI/CD pipeline to the new cluster to handle the migration of all resources, including:
+Deploying Kubernetes resources can be straightforward with a solid CI/CD pipeline in place. However, there are several reasons that could prevent you from simply pointing your CI/CD pipeline to the new cluster, including:
 
-- Your CI/CD pipeline itself may be running in the source cluster and could be inaccessible.
-- Some resources—like secrets—are provisioned using different processes, separate from CI/CD.
-- Your persistent data volumes contain important data that can't be copied over using your CI/CD pipeline.
+- Your CI/CD pipeline itself runs in the source cluster. +- Some resources, such as secrets, are provisioned outside your CI/CD pipeline. +- Persistent data volumes hold data that your CI/CD pipeline cannot copy. -In scenarios such as these, DevOps engineers may depend on Velero. +In scenarios such as these, DevOps engineers may look to Velero. -### What Is Velero? +## What Is Velero? -[**Velero**](https://velero.io/) is an open source, Kubernetes-native tool for backing up and restoring Kubernetes resources and persistent volumes. It supports backup of core resources, namespaces, deployments, services, ConfigMaps, Secrets, and customer resource definitions (CRDs). It integrates with different storage backends—such AWS S3 or Linode Object Storage—for storing and restoring backups. +[**Velero**](https://velero.io/) is an open source, Kubernetes-native tool for backing up and restoring Kubernetes resources and persistent volumes. It supports backup of core resources, namespaces, deployments, services, ConfigMaps, Secrets, and Custom Resource Definitions (CRDs). It integrates with different storage backends for storing and restoring backups, including AWS S3 and Linode Object Storage. -This guide will walk through the process of using Velero to migrate a Kubernetes cluster with persistent volumes to Linode Kubernetes Engine (LKE). The focus of the guide will be on backing up and restoring a persistent data volume. For other aspects—such as adapting load balancing and DNS switching after the restore—refer to the Akamai Cloud guides on migrating to LKE (from [AWS EKS](/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/), [Google GKE](/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/), [Azure AKS](/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/), or [Oracle OKE](/docs/guides/migrating-from-oracle-kubernetes-engine-to-linode-kubernetes-engine-lke/)). +## Before You Begin -Although what's shown in this guide will start with an AWS EKS cluster as an example, the same process can apply to most Kubernetes providers. +This guide walks through the process of using Velero to migrate a Kubernetes cluster with persistent volumes to [Linode Kubernetes Engine (LKE)](https://techdocs.akamai.com/cloud-computing/docs/linode-kubernetes-engine). The focus of the guide is on backing up and restoring persistent data volumes. For other migration concerns (e.g. adapting load balancing or DNS switching after the restore), refer to the appropriate Akamai Cloud guides on migrating to LKE from: +- [AWS EKS](/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/) +- [Google GKE](/docs/guides/migrating-from-google-gke-to-linode-kubernetes-engine-lke/) +- [Azure AKS](/docs/guides/migrating-from-azure-aks-to-linode-kubernetes-engine-lke/) +- [Oracle OKE](/docs/guides/migrating-from-oracle-kubernetes-engine-to-linode-kubernetes-engine-lke/) -## Before You Begin +While the example in this guide starts with an AWS EKS cluster, the same process can apply to most Kubernetes providers. + +{{< note >}} +EKS `t3.micro` nodes lack sufficient memory and pod capacity for running Velero reliably. This guide uses `t3.small` as the minimum functional node type for EKS-based Velero testing. +{{< /note >}} 1. Follow Akamai's [Getting Started](https://techdocs.akamai.com/cloud-computing/docs/getting-started) guide, and create an Akamai Cloud account if you do not already have one. 1. 
Create a personal access token using the instructions in the [Manage personal access tokens](https://techdocs.akamai.com/cloud-computing/docs/manage-personal-access-tokens) guide. 1. Install the Linode CLI using the instructions in the [Install and configure the CLI](https://techdocs.akamai.com/cloud-computing/docs/install-and-configure-the-cli) guide. -1. Follow the steps in the _*Install* `*kubectl*`_ section of the [Getting started with LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#install-kubectl) guide to install and configure `kubectl`. +1. Follow the steps in the *Install `kubectl`* section of the [Getting started with LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#install-kubectl) guide to install and configure `kubectl`. 1. If migrating a cluster from AWS, ensure that you have access to your AWS account with sufficient permissions to work with EKS clusters. 1. Install and configure the [AWS CLI](https://aws.amazon.com/cli/) and [`eksctl`](https://eksctl.io/). The command line tooling you use may vary if migrating a cluster from another provider. -1. Install `[jq](/docs/guides/migrating-from-aws-eks-to-linode-kubernetes-engine-lke/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers)`. -1. Install the `[velero](https://velero.io/docs/v1.3.0/velero-install/)` [CLI](https://velero.io/docs/v1.3.0/velero-install/). +1. Install [`jq`](/docs/guides/using-jq-to-process-json-on-the-command-line/#install-jq-with-package-managers). +1. Install the [`velero` CLI](https://velero.io/docs/v1.16/basic-install/#install-the-cli). + +### Using This Guide + +This tutorial contains a number of placeholders that are intended to be replaced by your own unique values. For reference purposes, the table below lists these placeholders, what they represent, and the example values used in this guide: + +| Placeholder | Represents | Example Value | |--------------------|------------------------------------------------------|---------------------------------------------------------| | `EKS_CLUSTER` | The name of your AWS EKS cluster. | `my-source-k8s-cluster` | | `AWS_REGION` | The AWS region for both EKS and S3. | `us-west-2` | | `ACCOUNT_ID` | Your AWS account ID (used in ARNs and OIDC ID). | `431966127852` | | `OIDC_ID` | The OIDC provider ID of the EKS cluster. | `50167EE12C1795D19075628E119` | | `BUCKET_NAME` | The name of the S3 bucket used by Velero. | `velero-backup-7777` | | `POLICY_ARN` | The ARN of the created IAM policy. | `arn:aws:iam::431966127852:policy/VeleroS3AccessPolicy` | | `CREDENTIALS_FILE` | The path to the credentials file created for Velero. | `~/aws-credentials-velero` | | `CLUSTER_ID` | The numeric ID of the target LKE cluster. | `463649` | + +{{< note title="All Values Have Been Sanitized" >}} +All of the example values used in this guide are purely examples to mimic and display the format of actual secrets. Nothing listed is a real credential to any existing system. + +When creating your own values, **do not** use any of the above credentials. +{{< /note >}} ## Downtime During the Migration -The migration process shown in this guide will involve some downtime. +The migration process shown in this guide involves some downtime.
Keep in mind the following considerations during the migration: -- Double capacity might be required, so be aware of your usage quotas and limits. -- Both clusters (if available) might run concurrently for a period of time. -- Data will need to be read from and written to both clusters to keep them in sync. Appropriate read/write permissions must be in place. -- Incrementally by workloads, access to the source cluster will become read-only and eventually removed. -- Unified observability across both clusters may be beneficial. -- If problems occur on the new cluster, you will need the ability to roll back any workload. +- **Temporary Double Capacity:** Verify quotas/limits so you can run both old and new clusters in parallel. +- **Concurrent Operation:** Both clusters may run simultaneously while you validate workloads. +- **Dual Read/Write Paths:** Data needs to flow to and from both clusters, so ensure the appropriate permissions. +- **Staged Lockdown of the Source:** Gradually make the source cluster read‑only, then decommission it. +- **Unified Observability:** Monitor both clusters with the same tooling to spot issues quickly. +- **Rollback Capability:** Be ready to revert any workload if the target cluster misbehaves. ## Prepare the Source Cluster for Velero Usage -The starting point for this guide is an AWS EKS cluster that has already been provisioned in AWS’s `us-west-2` region. Before installing and using Velero, take the following steps to prepare your source cluster. +This guide starts from an existing AWS EKS cluster in the `us-west-2` region. Before installing and using Velero, take the following steps to prepare your source cluster. 1. **Associate the EKS cluster with an OIDC provider**: Enables Kubernetes service accounts to securely assume AWS IAM roles. 1. **Provision EBS CSI support in the cluster**: Allows Kubernetes to dynamically provision and manage EBS volumes. @@ -64,863 +94,1098 @@ The starting point for this guide is an AWS EKS cluster that has already been pr 1. **Create an S3 bucket for storing Velero backups**: Sets up the location for Velero to save and retrieve backup data and snapshots. 1. **Set up IAM credentials for Velero to use S3**: Grants Velero the necessary permissions to access the S3 bucket for backup and restore operations. -With these pieces in place, you'll be ready to install Velero with the necessary permissions and infrastructure to back up workloads—including persistent volume data—from the EKS cluster to S3. +With these in place, you can install Velero with the necessary permissions and infrastructure to back up workloads (including persistent volume data) from EKS to S3. ### Associate the Cluster with an OIDC Provider -An OIDC provider is required to enable [IAM roles for service accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), which is the recommended way for Velero to authenticate to AWS services like S3. +An OIDC provider is required to enable [IAM roles for service accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html). This is the recommended way for Velero to authenticate to AWS services like S3. -```command {title="Set initial environment variables for terminal session"} -export AWS_PROFILE='INSERT YOUR AWS PROFILE' -export EKS_CLUSTER="my-source-k8s-cluster" -export REGION="us-west-2" -export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) -``` +1. 
First, set the initial environment variables for the terminal session, replacing {{< placeholder "EKS_CLUSTER" >}} and {{< placeholder "AWS_REGION" >}}: -[Create the OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) with the following command: + ```command + export EKS_CLUSTER="{{< placeholder "EKS_CLUSTER" >}}" + export AWS_REGION="{{< placeholder "AWS_REGION" >}}" + export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) + ``` -```command {title="Create OIDC provider"} -eksctl utils associate-iam-oidc-provider \ - --cluster "$EKS_CLUSTER" \ - --region "$REGION" \ - --approve -``` +1. [Create the OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) with the following command: -```output -2025-05-31 11:51:46 [ℹ] will create IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" -2025-05-31 11:51:47 [✔] created IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" -``` + ```command + eksctl utils associate-iam-oidc-provider \ + --cluster "$EKS_CLUSTER" \ + --region "$AWS_REGION" \ + --approve + ``` -Verify that OIDC creation was successful. + ```output + 2025-05-31 11:51:46 [ℹ] will create IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" + 2025-05-31 11:51:47 [✔] created IAM Open ID Connect provider for cluster "my-source-k8s-cluster" in "us-west-2" + ``` -```command {title="Verify successful OIDC creation"} -aws eks describe-cluster \ - --name "$EKS_CLUSTER" \ - --region "$REGION" \ - --query "cluster.identity.oidc.issuer" \ - --output text -``` +1. Verify that the OIDC creation was successful: -```output -https://oidc.eks.us-west-2.amazonaws.com/id/50167EE12C1795D19075628E119 -``` + ```command + aws eks describe-cluster \ + --name "$EKS_CLUSTER" \ + --region "$AWS_REGION" \ + --query "cluster.identity.oidc.issuer" \ + --output text + ``` -Capture the last part of the output string with the OIDC provider ID and store it as an environment variable: + ```output + https://oidc.eks.us-west-2.amazonaws.com/id/50167EE12C1795D19075628E119 + ``` -```command {title="Store OIDC provider id as environment variable"} -export OIDC_ID=50167EE12C1795D19075628E119 -``` +1. Capture the last part of the output string with the OIDC provider ID and store it as an environment variable, for example: -### Provision EBS CSI Support in the Cluster - -The CSI provisioner is a plugin that allows Kubernetes to create and manage storage volumes—like EBS disks—on demand, whenever a `PersistentVolumeClaim` (PVC) is made. Provisioning EBS CSI support requires a few steps. + ```command + export OIDC_ID=50167EE12C1795D19075628E119 + ``` -Create an IAM role for the EBS CSI driver with the trust policy for OIDC. 
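{{< note >}}
As an optional sanity check (not required for the rest of the guide, and assuming the same AWS CLI credentials used in the steps above), you can confirm that the provider was registered in IAM by listing your account's OIDC providers and searching for the captured provider ID:

```command
aws iam list-open-id-connect-providers | grep "$OIDC_ID"
```

If the command prints an ARN containing your `OIDC_ID` value, IRSA is ready for the steps that follow.
{{< /note >}}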
+### Provision EBS CSI Support in the Cluster -```command {title="Create IAM role for EBS CSI driver"} -aws iam create-role \ - --role-name AmazonEKS_EBS_CSI_DriverRole \ - --assume-role-policy-document "{ - \"Version\": \"2012-10-17\", - \"Statement\": [ - { - \"Effect\": \"Allow\", - \"Principal\": { - \"Federated\": \"arn:aws:iam::${ACCOUNT_ID}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}\" - }, - \"Action\": \"sts:AssumeRoleWithWebIdentity\", - \"Condition\": { - \"StringEquals\": { - \"oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}:sub\": \"system:serviceaccount:kube-system:ebs-csi-controller-sa\" +The CSI provisioner is a plugin that allows Kubernetes to create and manage storage volumes (e.g. EBS disks) on demand, whenever a `PersistentVolumeClaim` (PVC) is made. + +1. Use `cat` to create a file called `trust-policy.json`: + + ```command + cat > trust-policy.json <}} with a name of your choice (e.g. `velero-backup-7777`) and add it to your environment variables: -```command {title="Add the BUCKET_NAME environment variable to the terminal session"} -export BUCKET_NAME=velero-backup-7777 -``` + ```command + export BUCKET_NAME={{< placeholder "BUCKET_NAME" >}} + ``` -```command {title="Create S3 bucket"} -aws s3api create-bucket \ - --bucket "$BUCKET_NAME" \ - --region "$REGION" \ - --create-bucket-configuration LocationConstraint="$REGION" -``` +1. Create the S3 bucket where Velero can store its backups: -```output -{ - "Location": "http://velero-backup-7777.s3.amazonaws.com/" -} -``` + ```command + aws s3api create-bucket \ + --bucket "$BUCKET_NAME" \ + --region "$AWS_REGION" \ + --create-bucket-configuration LocationConstraint="$AWS_REGION" + ``` -The bucket should not be public. Only Velero should access it. + ```output + { + "Location": "http://velero-backup-7777.s3.amazonaws.com/" + } + ``` + + {{< note >}} + This full command works in all AWS regions except `us-east-1` (N. Virginia), where including `--create-bucket-configuration` causes an `InvalidLocationConstraint` error: + + ```output + An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid + ``` + + If you’re using the `us-east-1` AWS region, run this shortened version of the command instead: -```command {title="Block public access to S3 bucket"} -aws s3api put-public-access-block \ - --bucket "$BUCKET_NAME" \ - --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true -``` + ```command + aws s3api create-bucket \ + --bucket "$BUCKET_NAME" \ + --region "$AWS_REGION" + ``` + {{< /note >}} + +1. Block public access to S3 bucket (only Velero should access it): + + ```command + aws s3api put-public-access-block \ + --bucket "$BUCKET_NAME" \ + --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true + ``` ### Set up IAM Credentials for Velero to Use S3 To give Velero access to the S3 bucket, begin by creating the IAM policy. 
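{{< note >}}
Before creating the policy, you can optionally confirm that the public access block applied in the previous step is in effect. This quick check assumes `BUCKET_NAME` is still set in your terminal session:

```command
aws s3api get-public-access-block --bucket "$BUCKET_NAME"
```

All four settings in the returned `PublicAccessBlockConfiguration` should report `true`.
{{< /note >}}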
-```command {title="Create IAM policy for Velero to access S3, then echo policy ARN"} -POLICY_ARN=$(aws iam create-policy \ - --policy-name VeleroS3AccessPolicy \ - --policy-document "{ - \"Version\": \"2012-10-17\", - \"Statement\": [ - { - \"Sid\": \"ListAndGetBucket\", - \"Effect\": \"Allow\", - \"Action\": [ - \"s3:ListBucket\", - \"s3:GetBucketLocation\" - ], - \"Resource\": \"arn:aws:s3:::$BUCKET_NAME\" - }, - { - \"Sid\": \"CRUDonObjects\", - \"Effect\": \"Allow\", - \"Action\": [ - \"s3:PutObject\", - \"s3:GetObject\", - \"s3:DeleteObject\" - ], - \"Resource\": \"arn:aws:s3:::$BUCKET_NAME/*\" - } - ] - }" \ - --query 'Policy.Arn' --output text) echo $POLICY_ARN -``` - -```output -arn:aws:iam::431966127852:policy/VeleroS3AccessPolicy -``` - -Create the Velero user and attach the policy. - -```command {title="Create Velero user and attach policy"} -aws iam create-user \ - --user-name velero - -aws iam attach-user-policy \ - --user-name velero \ - --policy-arn "$POLICY_ARN" -``` - -```output -{ - "User": { - "Path": "/", - "UserName": "velero", - "UserId": "AIDAWE6V6YHZ6334NZZ3Z", - "Arn": "arn:aws:iam::431966127852:user/velero", - "CreateDate": "2025-05-31T07:03:40+00:00" +1. Use `cat` to create the Velero S3 access policy in a file called `velero-s3-policy.json`: + + ```command + cat > velero-s3-policy.json < OUT; - print "aws_access_key_id = "$1 >> OUT; - print "aws_secret_access_key = "$2 >> OUT; - }' -``` + The `velero` IAM user now has access to the bucket. -Verify the credentials file was created successfully. +1. Create an environment variable to define where Velero’s AWS credentials should go: + + ```command + export CREDENTIALS_FILE=~/aws-credentials-velero + ``` + +1. Generate an access key for the `velero` user and write it to that file: + + ```command + aws iam create-access-key \ + --user-name velero \ + --query 'AccessKey.[AccessKeyId,SecretAccessKey]' \ + --output text | \ + awk -v OUT="$CREDENTIALS_FILE" ' + { + print "[default]" > OUT; + print "aws_access_key_id = "$1 >> OUT; + print "aws_secret_access_key = "$2 >> OUT; + }' + ``` + +1. Verify the credentials file was created successfully: + + ```command + cat "$CREDENTIALS_FILE" + ``` + + ```output + [default] + aws_access_key_id = AKIAFAKEACCESSKEY1234 + aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYFAKEKEY + ``` ## Install and Configure Velero on Source Cluster -With the source cluster properly prepared, you can install Velero on the EKS cluster, configured with the S3 backup location and credentials file that authorizes access to the bucket. 
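{{< note >}}
If Velero later reports access errors against the bucket, a common cause is a missing policy attachment. As an optional pre-install check (assuming the `velero` IAM user and the `VeleroS3AccessPolicy` were created as described above), list the user's attached policies and confirm the policy appears in the output:

```command
aws iam list-attached-user-policies --user-name velero
```
{{< /note >}}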
- -```command {title="Install Velero on source cluster"} -velero install \ - --provider aws \ - --plugins velero/velero-plugin-for-aws:v1.12.0 \ - --bucket "$BUCKET_NAME" \ - --secret-file $CREDENTIALS_FILE \ - --backup-location-config region=$REGION \ - --use-node-agent \ - --use-volume-snapshots=false \ - --default-volumes-to-fs-backup -``` - -```output -CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource -CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client -CustomResourceDefinition/backuprepositories.velero.io: created -CustomResourceDefinition/backups.velero.io: attempting to create resource -CustomResourceDefinition/backups.velero.io: attempting to create resource client -CustomResourceDefinition/backups.velero.io: created -CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource -CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client -CustomResourceDefinition/backupstoragelocations.velero.io: created -CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource -CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client -CustomResourceDefinition/deletebackuprequests.velero.io: created -CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource -CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client -CustomResourceDefinition/downloadrequests.velero.io: created -CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource -CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client -CustomResourceDefinition/podvolumebackups.velero.io: created -CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource -CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client -CustomResourceDefinition/podvolumerestores.velero.io: created -CustomResourceDefinition/restores.velero.io: attempting to create resource -CustomResourceDefinition/restores.velero.io: attempting to create resource client -CustomResourceDefinition/restores.velero.io: created -CustomResourceDefinition/schedules.velero.io: attempting to create resource -CustomResourceDefinition/schedules.velero.io: attempting to create resource client -CustomResourceDefinition/schedules.velero.io: created -CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource -CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client -CustomResourceDefinition/serverstatusrequests.velero.io: created -CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource -CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client -CustomResourceDefinition/volumesnapshotlocations.velero.io: created -CustomResourceDefinition/datadownloads.velero.io: attempting to create resource -CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client -CustomResourceDefinition/datadownloads.velero.io: created -CustomResourceDefinition/datauploads.velero.io: attempting to create resource -CustomResourceDefinition/datauploads.velero.io: attempting to create resource client -CustomResourceDefinition/datauploads.velero.io: created -Waiting for resources to be ready in cluster... 
-Namespace/velero: attempting to create resource -Namespace/velero: attempting to create resource client -Namespace/velero: created -ClusterRoleBinding/velero: attempting to create resource -ClusterRoleBinding/velero: attempting to create resource client -ClusterRoleBinding/velero: created -ServiceAccount/velero: attempting to create resource -ServiceAccount/velero: attempting to create resource client -ServiceAccount/velero: created -Secret/cloud-credentials: attempting to create resource -Secret/cloud-credentials: attempting to create resource client -Secret/cloud-credentials: created -BackupStorageLocation/default: attempting to create resource -BackupStorageLocation/default: attempting to create resource client -BackupStorageLocation/default: created -Deployment/velero: attempting to create resource -Deployment/velero: attempting to create resource client -Deployment/velero: created -DaemonSet/node-agent: attempting to create resource -DaemonSet/node-agent: attempting to create resource client -DaemonSet/node-agent: created -Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status. -``` - -To perform its full range of tasks, Velero creates its own namespace, several CRDs, a deployment, a service, and a node agent. Verify the Velero installation. - -```command {title="Check Velero version"} -velero version -``` - -```output -Client: - Version: v1.16.1 - Git commit: - -Server: - Version: v1.16.1 -``` - -Check the pods in the `velero` namespace. - -```command {title="Get pods in Velero namespace"} -kubectl get pods -n velero -``` - -```output -NAME READY STATUS RESTARTS AGE -node-agent-chnzw 1/1 Running 0 59s -node-agent-ffqlg 1/1 Running 0 59s -velero-6f4546949d-kjtnv 1/1 Running 0 59s -``` - -Verify the backup location configured for Velero. - -```command {title="Get backup location for Velero"} -velero backup-location get -``` - -```output -NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT -default aws velero-backup-7777 Available 2025-05-31 10:12:12 +0300 IDT ReadWrite true -``` +With the source cluster properly configured with the S3 backup location and credentials file, you can install Velero on the EKS cluster. + +1. 
Install Velero on the source cluster: + + ```command + velero install \ + --provider aws \ + --plugins velero/velero-plugin-for-aws:v1.12.2 \ + --bucket "$BUCKET_NAME" \ + --secret-file "$CREDENTIALS_FILE" \ + --backup-location-config region=$AWS_REGION \ + --use-node-agent \ + --use-volume-snapshots=false \ + --default-volumes-to-fs-backup + ``` + + ```output + CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource + CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client + CustomResourceDefinition/backuprepositories.velero.io: created + CustomResourceDefinition/backups.velero.io: attempting to create resource + CustomResourceDefinition/backups.velero.io: attempting to create resource client + CustomResourceDefinition/backups.velero.io: created + CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource + CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client + CustomResourceDefinition/backupstoragelocations.velero.io: created + CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource + CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client + CustomResourceDefinition/deletebackuprequests.velero.io: created + CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource + CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client + CustomResourceDefinition/downloadrequests.velero.io: created + CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource + CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client + CustomResourceDefinition/podvolumebackups.velero.io: created + CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource + CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client + CustomResourceDefinition/podvolumerestores.velero.io: created + CustomResourceDefinition/restores.velero.io: attempting to create resource + CustomResourceDefinition/restores.velero.io: attempting to create resource client + CustomResourceDefinition/restores.velero.io: created + CustomResourceDefinition/schedules.velero.io: attempting to create resource + CustomResourceDefinition/schedules.velero.io: attempting to create resource client + CustomResourceDefinition/schedules.velero.io: created + CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource + CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client + CustomResourceDefinition/serverstatusrequests.velero.io: created + CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource + CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client + CustomResourceDefinition/volumesnapshotlocations.velero.io: created + CustomResourceDefinition/datadownloads.velero.io: attempting to create resource + CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client + CustomResourceDefinition/datadownloads.velero.io: created + CustomResourceDefinition/datauploads.velero.io: attempting to create resource + CustomResourceDefinition/datauploads.velero.io: attempting to create resource client + CustomResourceDefinition/datauploads.velero.io: created + Waiting for resources to be ready in cluster... 
+ Namespace/velero: attempting to create resource + Namespace/velero: attempting to create resource client + Namespace/velero: created + ClusterRoleBinding/velero: attempting to create resource + ClusterRoleBinding/velero: attempting to create resource client + ClusterRoleBinding/velero: created + ServiceAccount/velero: attempting to create resource + ServiceAccount/velero: attempting to create resource client + ServiceAccount/velero: created + Secret/cloud-credentials: attempting to create resource + Secret/cloud-credentials: attempting to create resource client + Secret/cloud-credentials: created + BackupStorageLocation/default: attempting to create resource + BackupStorageLocation/default: attempting to create resource client + BackupStorageLocation/default: created + Deployment/velero: attempting to create resource + Deployment/velero: attempting to create resource client + Deployment/velero: created + DaemonSet/node-agent: attempting to create resource + DaemonSet/node-agent: attempting to create resource client + DaemonSet/node-agent: created + Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status. + ``` + + To perform its full range of tasks, Velero creates its own namespace, several CRDs, a deployment, a service, and a node agent. + +1. Verify the Velero installation: + + ```command + velero version + ``` + + ```output + Client: + Version: v1.16.2 + Git commit: - + Server: + Version: v1.16.2 + ``` + +1. Check the pods in the `velero` namespace: + + ```command + kubectl get pods -n velero + ``` + + ```output + NAME READY STATUS RESTARTS AGE + node-agent-chnzw 1/1 Running 0 59s + node-agent-ffqlg 1/1 Running 0 59s + velero-6f4546949d-kjtnv 1/1 Running 0 59s + ``` + +1. Verify the backup location configured for Velero: + + ```command + velero backup-location get + ``` + + ```output + NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT + default aws velero-backup-7777 Available 2025-05-31 10:12:12 +0300 IDT ReadWrite true + ``` ## Create a PersistentVolumeClaim in Source Cluster -In Kubernetes, the PersistentVolumeClaim (PVC) is the mechanism for creating persistent volumes that can be mounted to pods in the cluster. Create the PVC in the source cluster. +In Kubernetes, the PersistentVolumeClaim (PVC) is the mechanism for creating persistent volumes that can be mounted to pods in the cluster. + +1. Create the PVC in the source cluster: -```command {title="Create PersistentVolumeClaim"} -echo ' -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: the-pvc -spec: - accessModes: - - ReadWriteOnce - storageClassName: ebs-sc - resources: - requests: - storage: 1Mi -' | kubectl -n default apply -f - -``` + ```command + echo ' + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: the-pvc + spec: + accessModes: + - ReadWriteOnce + storageClassName: ebs-sc + resources: + requests: + storage: 1Gi + ' | kubectl -n default apply -f - + ``` -Note that this command uses the `StorageClass` named `ebs-sc`, which was created earlier. + Note that this command uses the `StorageClass` named `ebs-sc`, which was created earlier. -```output -persistentvolumeclaim/the-pvc created -``` + ```output + persistentvolumeclaim/the-pvc created + ``` -Verify the PVC was created successfully. +1. 
Verify that the PVC was created successfully: -```command {title="Get PVC"} -kubectl get pvc -n default -``` + ```command + kubectl get pvc -n default + ``` -```output -NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE -the-pvc Pending ebs-sc 9s -``` + The status remains `Pending` until the first consumer uses it: -Its status should be `Pending`. This is by design, as the status remains `Pending` until the first consumer uses it. + ```output + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE + the-pvc Pending ebs-sc 9s + ``` ## Run a Pod to Use the PVC and Write Data -Once a pod mounts a volume backed by the PVC, a corresponding persistent volume (in this example, backed by AWS EBS) will be created. Run a pod to mount the volume with the following command: - -```command {title="Run a pod to mount the PVC-backed volume"} -kubectl run the-pod \ - --image=bash:latest \ - --restart=Never \ - -it \ - --overrides=' -{ - "apiVersion": "v1", - "spec": { - "volumes": [ - { - "name": "the-vol", - "persistentVolumeClaim": { - "claimName": "the-pvc" - } - } - ], - "containers": [ - { - "name": "the-container", - "image": "bash:latest", - "command": ["bash"], - "stdin": true, - "tty": true, - "volumeMounts": [ +When you mount the PVC in a pod, Kubernetes dynamically provisions a matching PersistentVolume (backed by AWS EBS in this example). + +1. Run a pod to mount the PVC-backed volume: + + ```command + kubectl run the-pod \ + --image=bash:latest \ + --restart=Never \ + -it \ + --overrides=' + { + "apiVersion": "v1", + "spec": { + "volumes": [ { - "mountPath": "/data", - "name": "the-vol" + "name": "the-vol", + "persistentVolumeClaim": { + "claimName": "the-pvc" + } + } + ], + "containers": [ + { + "name": "the-container", + "image": "bash:latest", + "command": ["bash"], + "stdin": true, + "tty": true, + "volumeMounts": [ + { + "mountPath": "/data", + "name": "the-vol" + } + ] } ] } - ] - } -}' \ - -- bash -``` + }' \ + -- bash + ``` + +1. From the open bash shell, write sample data into the volume: -From the open bash shell, write sample data into the volume. + ```command {title="bash Shell"} + echo "Some data" > /data/some-data.txt + cat /data/some-data.txt + ``` -```command {title="Use pod's bash shell to write sample data"} -echo "Some data" > /data/some-data.txt -cat /data/some-data.txt -``` + ```output + Some data + ``` -```output -Some data -``` +1. Do **not** exit this shell. Keeping this shell alive ensures the Pod stays in the `Running` state so that Velero can snapshot its volume. ## Create a Velero Backup, Then Verify -With Velero installed and the persistent volume in place, run the backup command: - -```command {title="Use Velero to create a backup"} -elero backup create test-backup --wait -``` - -```output -Backup request "test-backup" submitted successfully. -Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. -............................................................. -Backup completed with status: Completed. You may check for more information using the commands `velero backup describe test-backup` and `velero backup logs test-backup`. 
-``` - -After the backup process has completed, use the `backup describe` command to confirm a successful backup: - -```command {title="Describe the backup"} -velero backup describe test-backup -``` - -```output -Name: test-backup -Namespace: velero -Labels: velero.io/storage-location=default -Annotations: velero.io/resource-timeout=10m0s - velero.io/source-cluster-k8s-gitversion=v1.32.5-eks-5d4a308 - velero.io/source-cluster-k8s-major-version=1 - velero.io/source-cluster-k8s-minor-version=32 -Phase: Completed -Namespaces: - Included: * - Excluded: -Resources: - Included: * - Excluded: - Cluster-scoped: auto -Label selector: -Or label selector: -Storage Location: default -Velero-Native Snapshot PVs: auto -Snapshot Move Data: false -Data Mover: velero -TTL: 720h0m0s -CSISnapshotTimeout: 10m0s -ItemOperationTimeout: 4h0m0s -Hooks: -Backup Format Version: 1.1.0 -Started: 2025-05-31 21:44:31 +0300 IDT -Completed: 2025-05-31 21:45:33 +0300 IDT -Expiration: 2025-06-30 21:44:31 +0300 IDT -Total items to be backed up: 454 -Items backed up: 454 -Backup Volumes: - Velero-Native Snapshots: - CSI Snapshots: - Pod Volume Backups - kopia (specify --details for more information): - Completed: 11 -HooksAttempted: 0 -HooksFailed: 0 -``` - -The critical information to verify is the Kopia item for pod volume backups toward the end of the output. Note in the above example that it says `Completed: 11`. This verifies the presence of backups. +1. Open a **new terminal** so you can leave the Pod’s shell running uninterrupted. + +1. In that new terminal, use Velero to create a backup: + + ```command {title="New Terminal"} + velero backup create test-backup \ + --include-namespaces default \ + --wait + ``` + + ```output + Backup request "test-backup" submitted successfully. + Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background. + ............................................................. + Backup completed with status: Completed. You may check for more information using the commands `velero backup describe test-backup` and `velero backup logs test-backup`. + ``` + +1. 
Once the backup process has completed, use the `backup describe` command to confirm a successful backup: + + ```command {title="New Terminal"} + velero backup describe test-backup + ``` + + ```output + Name: test-backup + Namespace: velero + Labels: velero.io/storage-location=default + Annotations: velero.io/resource-timeout=10m0s + velero.io/source-cluster-k8s-gitversion=v1.32.5-eks-5d4a308 + velero.io/source-cluster-k8s-major-version=1 + velero.io/source-cluster-k8s-minor-version=32 + + Phase: Completed + + + Namespaces: + Included: default + Excluded: + + Resources: + Included: * + Excluded: + Cluster-scoped: auto + + Label selector: + + Or label selector: + + Storage Location: default + + Velero-Native Snapshot PVs: auto + Snapshot Move Data: false + Data Mover: velero + + TTL: 720h0m0s + + CSISnapshotTimeout: 10m0s + ItemOperationTimeout: 4h0m0s + + Hooks: + + Backup Format Version: 1.1.0 + + Started: 2025-07-29 11:16:50 -0400 EDT + Completed: 2025-07-29 11:16:56 -0400 EDT + + Expiration: 2025-08-28 11:16:50 -0400 EDT + + Total items to be backed up: 16 + Items backed up: 16 + + Backup Volumes: + Velero-Native Snapshots: + + CSI Snapshots: + + Pod Volume Backups - kopia (specify --details for more information): + Completed: 1 + + HooksAttempted: 0 + HooksFailed: 0 + ``` + + The critical information to verify is the Kopia item for pod volume backups toward the end of the output. Note in the above example that it says `Completed: 1`. This verifies the presence of backups. + +1. Close the new terminal window and return to the original with the still-running bash shell. + +1. Exit the bash shell to terminate the Pod and return to your regular terminal prompt, where your environment variables are still in place for the next steps: + + ```command {title="bash Shell"} + exit + ``` ## Verify Backup in S3 -To close the loop, verify that the backup data has made its way to the configured S3 bucket. - -```command {title="List contents of test backup"} -s3cmd ls s3://$BUCKET_NAME/backups/test-backup/ -``` - -```output -2025-05-31 21:45:34 29 test-backup-csi-volumesnapshotclasses.json.gz -2025-05-31 21:45:33 29 test-backup-csi-volumesnapshotcontents.json.gz -2025-05-31 21:45:34 29 test-backup-csi-volumesnapshots.json.gz -2025-05-31 21:45:33 27 test-backup-itemoperations.json.gz -2025-05-31 21:45:33 23733 test-backup-logs.gz -2025-05-31 21:45:34 2481 test-backup-podvolumebackups.json.gz -2025-05-31 21:45:34 3022 test-backup-resource-list.json.gz -2025-05-31 21:45:34 49 test-backup-results.gz -2025-05-31 21:45:33 922 test-backup-volumeinfo.json.gz -2025-05-31 21:45:34 29 test-backup-volumesnapshots.json.gz -2025-05-31 21:45:33 138043 test-backup.tar.gz -2025-05-31 21:45:34 2981 velero-backup.json -``` +1. 
List the contents of `test-backup` to verify that the backup data made its way to the configured S3 bucket: + + ```command + aws s3 ls s3://$BUCKET_NAME/backups/test-backup/ + ``` + + The `velero-backup.json`, `test-backup.tar.gz`, `test-backup-podvolumebackups.json.gz`, and `test-backup-resource-list.json.gz` files confirm that metadata and PV data were uploaded: + + ```output + 2025-05-31 21:45:34 29 test-backup-csi-volumesnapshotclasses.json.gz + 2025-05-31 21:45:33 29 test-backup-csi-volumesnapshotcontents.json.gz + 2025-05-31 21:45:34 29 test-backup-csi-volumesnapshots.json.gz + 2025-05-31 21:45:33 27 test-backup-itemoperations.json.gz + 2025-05-31 21:45:33 23733 test-backup-logs.gz + 2025-05-31 21:45:34 2481 test-backup-podvolumebackups.json.gz + 2025-05-31 21:45:34 3022 test-backup-resource-list.json.gz + 2025-05-31 21:45:34 49 test-backup-results.gz + 2025-05-31 21:45:33 922 test-backup-volumeinfo.json.gz + 2025-05-31 21:45:34 29 test-backup-volumesnapshots.json.gz + 2025-05-31 21:45:33 138043 test-backup.tar.gz + 2025-05-31 21:45:34 2981 velero-backup.json + ``` ## Provision LKE Cluster -The persistent volume on your source cluster has been backed up using Velero. Now, provision your destination cluster on Akamai Cloud. There are several ways to create a Kubernetes cluster on Akamai Cloud. This guide uses the Linode CLI to provision resources. +With the persistent volume on your source cluster backed up with Velero, it's time to provision your destination cluster on Akamai Cloud. -See the [LKE documentation](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) for instructions on how to provision a cluster using Cloud Manager. +While there are several ways to create a Kubernetes cluster on Akamai Cloud, this guide uses the Linode CLI to provision resources. See the [LKE documentation](https://techdocs.akamai.com/cloud-computing/docs/create-a-cluster) for instructions on how to provision a cluster using Cloud Manager. -### See Available Kubernetes Versions +1. Use the Linode CLI (`linode-cli`) to list available Kubernetes versions: -Use the Linode CLI (`linode-cli`) to see available Kubernetes versions: + ```command + linode-cli lke versions-list + ``` -```command {title="List available Kubernetes versions"} -linode lke versions-list -``` + ```output + ┌──────┐ + │ id │ + ├──────┤ + │ 1.33 │ + ├──────┤ + │ 1.32 │ + └──────┘ + ``` -```output -┌──────┐ -│ id │ -├──────┤ -│ 1.32 │ -├──────┤ -│ 1.31 │ -└──────┘ -``` - -Unless specific requirements dictate otherwise, it’s generally recommended to provision the latest version of Kubernetes. + Unless specific requirements dictate otherwise, it’s generally recommended to provision the latest version of Kubernetes. ### Create a Cluster -Determine the type of Linode to provision. The examples in this guide use the g6-standard-2 Linode, which features two CPU cores and 4 GB of memory. Run the following command to create a cluster labeled `velero-to-lke` which uses the `g6-standard-2` Linode: - -```command {title="Create LKE cluster"} -lin lke cluster-create \ - --label velero-to-lke \ - --k8s_version 1.32 \ - --region us-sea \ - --node_pools '[{ - "type": "g6-standard-2", - "count": 1, - "autoscaler": { - "enabled": true, - "min": 1, - "max": 3 - } - }]' -``` +Determine the type of Linode to provision. The examples in this guide use the `g6-standard-2` Linode, which features two CPU cores and 4 GB of memory. + +2. 
Create an LKE cluster labeled `velero-to-lke` using the `g6-standard-2` Linode: + + ```command + linode-cli lke cluster-create \ + --label velero-to-lke \ + --k8s_version 1.33 \ + --region us-mia \ + --node_pools '[{ + "type": "g6-standard-2", + "count": 1, + "autoscaler": { + "enabled": true, + "min": 1, + "max": 3 + } + }]' + ``` + + ```output + ┌────────┬───────────────┬────────┬─────────────┬───────────────────────────────┬──────┐ + │ id │ label │ region │ k8s_version │ control_plane.high_availabil… │ tier │ + ├────────┼───────────────┼────────┼─────────────┼───────────────────────────────┼──────┤ + │ 463649 │ velero-to-lke │ us-mia │ 1.33 │ False │ │ + └────────┴───────────────┴────────┴─────────────┴───────────────────────────────┴──────┘ + ``` -```output -┌────────┬───────────────┬────────┬─────────────┐ -│ id │ label │ region │ k8s_version │ -├────────┼───────────────┼────────┼─────────────┤ -│ 463649 │ velero-to-lke │ us-sea │ 1.32 │ -└────────┴───────────────┴────────┴─────────────┘ -``` +### Access the Cluster -### Access the cluster +To access your cluster, fetch the cluster credentials as a `kubeconfig` file. -To access your cluster, fetch the cluster credentials as a `kubeconfig` file. Your cluster’s `kubeconfig` can also be [downloaded via the Cloud Manager](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#access-and-download-your-kubeconfig). Use the following command to retrieve the cluster’s ID: +3. Retrieve the cluster’s ID and set an environment variable: -```command {title="Retrieve cluster ID and set environment variable"} -CLUSTER_ID=$(linode lke clusters-list --json | \ - jq -r '.[] | select(.label == "velero-to-lke") | .id') -``` + ```command + CLUSTER_ID=$(linode-cli lke clusters-list --json | \ + jq -r '.[] | select(.label == "velero-to-lke") | .id') + ``` -Retrieve the `kubeconfig` file and save it to `\~/.kube/lke-config`: +1. Retrieve the `kubeconfig` file and save it to `~/.kube/lke-config`: -```command {title="Retrieve and save kubeconfig file"} -linode lke kubeconfig-view \ - --json "$CLUSTER_ID" \ - | jq -r '.[0].kubeconfig' \ - | base64 --decode > ~/.kube/lke-config -``` + ```command + linode-cli lke kubeconfig-view \ + --json "$CLUSTER_ID" \ + | jq -r '.[0].kubeconfig' \ + | base64 --decode > ~/.kube/lke-config + ``` -After saving the `kubeconfig`, access your cluster by using `kubectl` and specifying the file: +1. Use `kubectl` and specify the file to access your cluster: -```command {title="Use kubectl with kubeconfig to get nodes"} -kubectl get nodes --kubeconfig ~/.kube/lke-config -``` + ```command + kubectl get nodes --kubeconfig ~/.kube/lke-config + ``` -```output -NAME STATUS ROLES AGE VERSION -lke463649-678334-401dde8e0000 Ready 7m27s v1.32.1 -``` + ```output + NAME STATUS ROLES AGE VERSION + lke463649-678334-401dde8e0000 Ready 7m27s v1.33.0 + ``` + +{{< note >}} +Your cluster’s `kubeconfig` can also be [downloaded via the Cloud Manager](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-lke-linode-kubernetes-engine#access-and-download-your-kubeconfig). +{{< /note >}} ## Install Velero in LKE -If you are working in a different terminal session, ensure you have the environment variables for `BUCKET_NAME`, `REGION`, and `CREDENTIALS_FILE` with values identical to those earlier in this guide. 
In case you need to set them again, the command will look similar to: - -```command {title="Set environment variables"} -export BUCKET_NAME=velero-backup-7777 -export REGION=us-west-2 -export CREDENTIALS_FILE=~/aws-credentials-velero -``` - -Run the following command to install Velero in your LKE cluster: - -```command {title="Install Velero in LKE"} -velero install \ - --kubeconfig ~/.kube/lke-config \ - --provider aws \ - --plugins velero/velero-plugin-for-aws:v1.12.0 \ - --bucket "$BUCKET_NAME" \ - --secret-file $CREDENTIALS_FILE \ - --backup-location-config region=$REGION \ - --use-node-agent \ - --use-volume-snapshots=false \ - --default-volumes-to-fs-backup -``` - -Verify the Velero installation: - -```command {title="Verify the Velero installation"} -kubectl logs deployment/velero \ - -n velero \ - --kubeconfig ~/.kube/lke-config \ - | grep 'BackupStorageLocations is valid' -``` - -```output -Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init) -time="2025-05-31T20:52:50Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:128" -``` - -With the backup storage location properly configured, run this command to get information about existing backups. - -```command {title="Get backups"} -velero backup get --kubeconfig ~/.kube/lke-config -``` - -```output -NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR -test-backup Completed 0 0 2025-05-31 21:44:31 +0300 IDT 29d default -``` +If you are working in a different terminal session, ensure you have the environment variables for `BUCKET_NAME`, `AWS_REGION`, and `CREDENTIALS_FILE` with values identical to those used earlier in this guide. + +1. 
Install Velero in your LKE cluster: + + ```command + velero install \ + --kubeconfig ~/.kube/lke-config \ + --provider aws \ + --plugins velero/velero-plugin-for-aws:v1.12.1 \ + --bucket "$BUCKET_NAME" \ + --secret-file "$CREDENTIALS_FILE" \ + --backup-location-config region=$AWS_REGION \ + --use-node-agent \ + --use-volume-snapshots=false \ + --default-volumes-to-fs-backup + ``` + + ```output + CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource + CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client + CustomResourceDefinition/backuprepositories.velero.io: created + CustomResourceDefinition/backups.velero.io: attempting to create resource + CustomResourceDefinition/backups.velero.io: attempting to create resource client + CustomResourceDefinition/backups.velero.io: created + CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource + CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client + CustomResourceDefinition/backupstoragelocations.velero.io: created + CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource + CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client + CustomResourceDefinition/deletebackuprequests.velero.io: created + CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource + CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client + CustomResourceDefinition/downloadrequests.velero.io: created + CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource + CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client + CustomResourceDefinition/podvolumebackups.velero.io: created + CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource + CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client + CustomResourceDefinition/podvolumerestores.velero.io: created + CustomResourceDefinition/restores.velero.io: attempting to create resource + CustomResourceDefinition/restores.velero.io: attempting to create resource client + CustomResourceDefinition/restores.velero.io: created + CustomResourceDefinition/schedules.velero.io: attempting to create resource + CustomResourceDefinition/schedules.velero.io: attempting to create resource client + CustomResourceDefinition/schedules.velero.io: created + CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource + CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client + CustomResourceDefinition/serverstatusrequests.velero.io: created + CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource + CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client + CustomResourceDefinition/volumesnapshotlocations.velero.io: created + CustomResourceDefinition/datadownloads.velero.io: attempting to create resource + CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client + CustomResourceDefinition/datadownloads.velero.io: created + CustomResourceDefinition/datauploads.velero.io: attempting to create resource + CustomResourceDefinition/datauploads.velero.io: attempting to create resource client + CustomResourceDefinition/datauploads.velero.io: created + Waiting for resources to be ready in cluster... 
+ Namespace/velero: attempting to create resource + Namespace/velero: attempting to create resource client + Namespace/velero: created + ClusterRoleBinding/velero: attempting to create resource + ClusterRoleBinding/velero: attempting to create resource client + ClusterRoleBinding/velero: created + ServiceAccount/velero: attempting to create resource + ServiceAccount/velero: attempting to create resource client + ServiceAccount/velero: created + Secret/cloud-credentials: attempting to create resource + Secret/cloud-credentials: attempting to create resource client + Secret/cloud-credentials: created + BackupStorageLocation/default: attempting to create resource + BackupStorageLocation/default: attempting to create resource client + BackupStorageLocation/default: created + Deployment/velero: attempting to create resource + Deployment/velero: attempting to create resource client + Deployment/velero: created + DaemonSet/node-agent: attempting to create resource + DaemonSet/node-agent: attempting to create resource client + DaemonSet/node-agent: created + Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status. + ``` + +1. Verify the Velero installation: + + ```command + kubectl logs deployment/velero \ + -n velero \ + --kubeconfig ~/.kube/lke-config \ + | grep 'BackupStorageLocations is valid' + ``` + + ```output + Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init) + time="2025-05-31T20:52:50Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:128" + ``` + +1. With the backup storage location properly configured, run the following command to retrieve information about existing backups: + + ```command + velero backup get --kubeconfig ~/.kube/lke-config + ``` + + ```output + NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR + test-backup Completed 0 0 2025-05-31 21:44:31 +0300 IDT 29d default + ``` ## Restore the Backup in LKE -Now, use Velero to restore your source cluster backup into your destination cluster at LKE. +1. Restore your source cluster backup into your destination LKE cluster: + + ```command + velero restore create test-restore \ + --from-backup test-backup \ + --kubeconfig ~/.kube/lke-config + ``` + + ```output + Restore request "test-restore" submitted successfully. + Run `velero restore describe test-restore` or `velero restore logs test-restore` for more details. + ``` + +1. 
Check the restore status: + + ```command + velero restore describe test-restore --kubeconfig ~/.kube/lke-config + ``` + + At this point, the restore should appear in the `InProgress` phase and cannot complete until the post-restore adjustments are made: + + ```output + Name: test-restore + Namespace: velero + Labels: + Annotations: + + Phase: InProgress + Estimated total items to be restored: 8 + Items restored so far: 8 + + Started: 2025-08-08 10:40:13 -0400 EDT + Completed: + + Backup: test-backup + + Namespaces: + Included: all namespaces found in the backup + Excluded: -```command {title="Use Velero to restore a backup"} -velero restore create test-restore \ - --from-backup test-backup \ - --kubeconfig ~/.kube/lke-config -``` + Resources: + Included: * + Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io, csinodes.storage.k8s.io, volumeattachments.storage.k8s.io, backuprepositories.velero.io + Cluster-scoped: auto -```output -Restore request "test-restore" submitted successfully. -Run `velero restore describe test-restore` or `velero restore logs test-restore` for more details. -``` + Namespace mappings: -Check the restore status with the following command: + Label selector: -```command {title="Check restore status"} -velero restore describe test-restore --kubeconfig ~/.kube/lke-config -``` + Or label selector: + + Restore PVs: auto + + kopia Restores (specify --details for more information): + New: 1 + + Existing Resource Policy: + ItemOperationTimeout: 4h0m0s + + Preserve Service NodePorts: auto + + Uploader config: + + ``` ## Post-Restore Adjustments -Because you are transitioning from one Kubernetes provider to another, you may need to make some final post-restore adjustments. - -For example, if your destination cluster is at LKE, you will want to update your PVC to use the Linode storage class. Review the Linode CSI drivers with the following command: - -```command {title="See current CSI drivers"} -kubectl get csidrivers --kubeconfig ~/.kube/lke-config -``` - -```output -NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE -ebs.csi.aws.com true false false false Persistent 22m -efs.csi.aws.com false false false false Persistent 22m -linodebs.csi.linode.com true true false false Persistent 69m -``` - -Review the available storage classes: - -```command {title="Review available storage classes"} -kubectl get storageclass --kubeconfig ~/.kube/lke-config -``` - -```output -NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE -ebs-sc ebs.csi.aws.com Delete WaitForFirstConsumer true 6h22m -gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 6h22m -linode-block-storage linodebs.csi.linode.com Delete Immediate true 7h9m -linode-block-storage-retain (default) linodebs.csi.linode.com Retain Immediate true 7h9m -``` +Because you are transitioning from one Kubernetes provider to another, you may need to make some final post-restore adjustments. For example, if your destination is LKE, you need to update your PVC to use the Linode storage class. -Use the default `linode-block-storage-retain` storage class. However, you must first delete the restored PVC and recreate it with the new storage class. +1. 
1. Review the Linode CSI drivers:

    ```command
    kubectl get csidrivers --kubeconfig ~/.kube/lke-config
    ```

    ```output
    NAME                      ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
    linodebs.csi.linode.com   true             true             false                             false               Persistent   69m
    ```

1. Review the available storage classes:

    ```command
    kubectl get storageclass --kubeconfig ~/.kube/lke-config
    ```

    ```output
    NAME                                    PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    linode-block-storage                    linodebs.csi.linode.com   Delete          Immediate           true                   7h9m
    linode-block-storage-retain (default)   linodebs.csi.linode.com   Retain          Immediate           true                   7h9m
    ```

    This guide uses the default `linode-block-storage-retain` storage class. To switch the restored PVC to it, you must first delete the PVC and then recreate it with the new storage class.

1. Delete the restored PVC:

    ```command
    kubectl delete pvc the-pvc --kubeconfig ~/.kube/lke-config
    ```

    ```output
    persistentvolumeclaim "the-pvc" deleted
    ```

1. Recreate the PVC with the new storage class:

    ```command
    echo '
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: the-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ' | kubectl apply -f - --kubeconfig ~/.kube/lke-config
    ```

    ```output
    persistentvolumeclaim/the-pvc created
    ```
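    Optionally, confirm that the recreated claim picked up the expected storage class. The claim name and kubeconfig path are the same ones used throughout this guide:

    ```command
    kubectl get pvc the-pvc \
        -o jsonpath='{.spec.storageClassName}' \
        --kubeconfig ~/.kube/lke-config
    ```

    This should print `linode-block-storage-retain`.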
1. The new PVC is bound to a new persistent volume. To confirm this, run the following command to view the PVC, PV, and pod information:

    ```command
    kubectl get pvc,pv,pod --kubeconfig ~/.kube/lke-config
    ```

    The pod is stuck in an `Init` state because it is still trying to bind to the previous (and now invalid) PVC:

    ```output
    NAME                            STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS                  VOLUMEATTRIBUTESCLASS   AGE
    persistentvolumeclaim/the-pvc   Bound    pvc-711d050fae7641ee   10Gi       RWO            linode-block-storage-retain                           2m12s

    NAME                                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS                  VOLUMEATTRIBUTESCLASS   REASON   AGE
    persistentvolume/pvc-711d050fae7641ee   10Gi       RWO            Retain           Bound    default/the-pvc   linode-block-storage-retain                                    2m9s

    NAME          READY   STATUS     RESTARTS   AGE
    pod/the-pod   0/1     Init:0/1   0          6h38m
    ```

1. Delete the stuck pod:

    ```command
    kubectl delete pod the-pod --kubeconfig ~/.kube/lke-config
    ```

    The command may appear to hang for a moment while the pod terminates:

    ```output
    pod "the-pod" deleted
    ```

1. Remove the finalizers from the blocked restore so that it can be deleted:

    ```command
    kubectl patch restore test-restore \
        --patch '{"metadata":{"finalizers":[]}}' \
        --type merge \
        -n velero \
        --kubeconfig ~/.kube/lke-config
    ```

    ```output
    restore.velero.io/test-restore patched
    ```

1. Delete the blocked restore:

    ```command
    kubectl delete restore test-restore \
        -n velero \
        --kubeconfig ~/.kube/lke-config
    ```

    ```output
    restore.velero.io "test-restore" deleted
    ```

1. Re-run the Velero restore:

    ```command
    velero restore create test-restore \
        --from-backup test-backup \
        --kubeconfig ~/.kube/lke-config
    ```

    ```output
    Restore request "test-restore" submitted successfully.
    Run `velero restore describe test-restore` or `velero restore logs test-restore` for more details.
    ```

    {{< note >}}
    Velero detects that the PVC (`the-pvc`) already exists and does not overwrite it unless explicitly requested to do so.
    {{< /note >}}

1. Verify that your pod was restored:

    ```command
    kubectl get pod the-pod --kubeconfig ~/.kube/lke-config
    ```

    The pod status should now be `Running`:

    ```output
    NAME      READY   STATUS    RESTARTS   AGE
    the-pod   1/1     Running   0          118s
    ```

1. Exec into the pod to verify that the volume is mounted and that the data originally written to the EBS volume on AWS is accessible on LKE:

    ```command
    kubectl exec the-pod --kubeconfig ~/.kube/lke-config -- cat /data/some-data.txt
    ```

    ```output
    Defaulted container "the-container" out of: the-container, restore-wait (init)
    Some data
    ```

You have successfully performed an end-to-end backup and restore of a Kubernetes cluster from AWS EKS to LKE, including migration of persistent data across two different cloud providers' storage systems.

## Final Considerations

Keep these points in mind as you plan and execute the migration.

### Persistent Data Movement Modes

Velero supports two approaches:

- **[CSI snapshots](https://velero.io/docs/main/csi/)**: Recommended when backing up and restoring within the same Kubernetes provider. This mode uses the Kubernetes CSI volume snapshot API and only requires that the same CSI driver be installed in the source and destination clusters.
- **File system backups via Kopia**: Used in this walkthrough. This is the best option when the source and destination Kubernetes providers are incompatible.

### ConfigMaps, Secrets, and Certificates

Velero can restore any Kubernetes Secret resource. However, Secrets and certificates are often tied to the cloud provider. If a Secret is only used to access AWS services that have been replaced by equivalent services on Akamai Cloud, there is no need to migrate it. The same applies to ConfigMaps that contain provider-specific configuration.

### Downtime Planning

Velero doesn't offer zero-downtime migrations. Expect to block all or most traffic to the cluster during the backup and restore. Restoring from a stale backup means either losing data or backfilling it from the old cluster later.

When downtime is unavoidable, it's safer to schedule it: perform a backup and immediately restore it to the new cluster.
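The cutover window can be kept short by scripting the final backup and restore together. The following is a minimal sketch only; the backup name `cutover-backup`, restore name `cutover-restore`, and source-cluster kubeconfig path `~/.kube/eks-config` are hypothetical, so substitute the names and paths from your own environment:

```command
velero backup create cutover-backup --wait --kubeconfig ~/.kube/eks-config
velero restore create cutover-restore \
    --from-backup cutover-backup \
    --wait \
    --kubeconfig ~/.kube/lke-config
```

The `--wait` flag blocks until each operation completes, so traffic can be switched to the new cluster as soon as the restore finishes.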
### Other Use Case: Backups for Multi-Cloud Architectures

While this guide focuses on migration, Velero also supports multi-cloud strategies. By configuring Velero with backup locations across multiple cloud providers, you can:

- Back up workloads from one cluster and restore them into a cluster in a different cloud, creating a resilient disaster recovery setup.
- Enable workload portability between environments for hybrid deployments or to meet data redundancy requirements for compliance reasons.
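To experiment with this, you can register an additional backup storage location alongside the default one. The following is a minimal sketch, assuming the S3-compatible AWS plugin installed earlier in this guide and a hypothetical bucket named `velero-backups-dr` in Linode Object Storage's `us-east-1` region; substitute your own provider, bucket, and endpoint:

```command
velero backup-location create dr-location \
    --provider aws \
    --bucket velero-backups-dr \
    --config region=us-east-1,s3ForcePathStyle=true,s3Url=https://us-east-1.linodeobjects.com \
    --kubeconfig ~/.kube/lke-config
```

Subsequent backups can then target either location, for example with `velero backup create <name> --storage-location dr-location`.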