Velero is a popular open source backup solution for Kubernetes. Its core implementation is a controller running in the cluster that oversees the backup and restore operations. The administrator is given a CLI tool to schedule operations and/or perform on-demand backups and restores. This CLI tool creates Kubernetes resources that the in-cluster Velero controller acts upon. During installation the controller needs to be configured with a repository (called a ‘provider’) where the backup files are stored.
This document describes how to set up Velero with the MinIO provider acting as an S3 compatible object store.
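As an illustration of this model, once the installation described in the rest of this document is complete, the custom resource types that the CLI creates and the controller reconciles can be inspected directly with kubectl (this listing is purely informational and can be skipped):
# Custom resource types installed by Velero (Backup, Restore, Schedule, ...)
sudo microk8s kubectl api-resources --api-group=velero.io
# Backup resources created by the 'velero backup create' command
sudo microk8s kubectl get backups.velero.io -n velero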
Prerequisites
Enabling required components
The dns and helm3 add-ons are needed for this setup:
sudo microk8s enable dns
sudo microk8s enable helm3
Install MinIO
MinIO provides an S3 compatible interface over storage provisioned by Kubernetes. For the purposes of this guide, the hostpath-storage add-on is used to satisfy the persistent volume claims:
sudo microk8s enable hostpath-storage
Helm is used to set up MinIO under the velero namespace:
sudo microk8s kubectl create namespace velero
sudo microk8s helm3 repo add minio https://helm.min.io
sudo microk8s helm3 install -n velero --set buckets[0].name=velero,buckets[0].policy=none,buckets[0].purge=false minio minio/minio
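Before continuing, it is worth verifying that MinIO is up. The label selector below assumes the chart's default release labels:
sudo microk8s kubectl -n velero get pods,svc -l release=minio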
Create a demo workload
The workload we will demonstrate the backup with is an NGINX deployment and a corresponding service under the workloads namespace. Create this setup with:
sudo microk8s kubectl create namespace workloads
sudo microk8s kubectl create deployment nginx -n workloads --image nginx
sudo microk8s kubectl expose deployment nginx -n workloads --port 80
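The deployment becomes available once the image has been pulled. As a sanity check you can wait for the rollout and list the created resources:
sudo microk8s kubectl -n workloads rollout status deployment/nginx
sudo microk8s kubectl get all -n workloads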
Installing Velero
To install Velero we get a binary from the releases page on GitHub and place it in our PATH. In this case we install the v1.7.1 Linux binary for AMD64 under /usr/local/bin:
wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.1/velero-v1.7.1-linux-amd64.tar.gz
tar -xzf velero-v1.7.1-linux-amd64.tar.gz
chmod +x velero-v1.7.1-linux-amd64/velero
sudo chown root:root velero-v1.7.1-linux-amd64/velero
sudo mv velero-v1.7.1-linux-amd64/velero /usr/local/bin/velero
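To confirm the binary is in the PATH and runs, print the client version (the --client-only flag skips contacting the cluster):
velero version --client-only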
Before installing Velero, we export the kubeconfig file from MicroK8s:
mkdir -p $HOME/.kube
sudo microk8s config > $HOME/.kube/config
We also export the MinIO credentials so we can feed them to Velero:
ACCESS_KEY=$(sudo microk8s kubectl -n velero get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode)
SECRET_KEY=$(sudo microk8s kubectl -n velero get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode)
cat <<EOF > credentials-velero
[default]
aws_access_key_id=${ACCESS_KEY}
aws_secret_access_key=${SECRET_KEY}
EOF
We are now ready to install Velero:
velero install \
--use-restic \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000 \
--snapshot-location-config region=minio
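Once the install command finishes, it is worth confirming that the Velero pods are running and that the backup location backed by MinIO is reachable; in recent Velero versions the location should eventually report as Available:
sudo microk8s kubectl -n velero get pods
velero backup-location get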
Velero uses Restic for backing up Kubernetes volumes. To point Restic at the kubelet directory used by MicroK8s, we need to patch its daemonset manifest:
sudo microk8s kubectl -n velero patch daemonset.apps/restic --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value":"/var/snap/microk8s/common/var/lib/kubelet/pods"}]'
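You can wait for the patched daemonset to roll out before taking any backups:
sudo microk8s kubectl -n velero rollout status daemonset/restic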
Backup workloads
To back up the workloads namespace we use the --include-namespaces argument:
velero backup create workloads-backup --include-namespaces=workloads
Note: Please consult the official Velero documentation on how to back up persistent volumes, the supported volume types, and the limitations of hostpath storage.
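As a hypothetical example of the opt-in approach described in that documentation, a pod volume named data (not part of the demo deployment above) could be marked for Restic backup with the backup.velero.io/backup-volumes annotation:
# 'data' is a placeholder volume name; replace it with a volume defined in the pod spec
sudo microk8s kubectl -n workloads annotate pod -l app=nginx backup.velero.io/backup-volumes=data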
To check the progress of a backup operation we use describe, providing the backup name:
velero backup describe workloads-backup
In the output you should see that the operation has completed:
Name: workloads-backup
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.23.3-2+3cea96839f0d64
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=23+
Phase: Completed
Errors: 0
Warnings: 0
Namespaces:
Included: workloads
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1.1.0
Started: 2022-02-08 10:44:08 +0200 EET
Completed: 2022-02-08 10:44:10 +0200 EET
Expiration: 2022-03-10 10:44:08 +0200 EET
Total items to be backed up: 17
Items backed up: 17
Velero-Native Snapshots: <none included>
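If a backup does not complete, or for more detail on what was included, the backups can be listed and their logs retrieved:
velero backup get
velero backup logs workloads-backup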
Restore workloads
Before restoring the workloads namespace, let’s delete it first:
sudo microk8s kubectl delete namespace workloads
We can now create a restore operation specifying the backup we want to use:
velero restore create --from-backup workloads-backup
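The restore is given a generated name derived from the backup name and a timestamp; listing the restore operations reveals it:
velero restore get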
We can then monitor the restore operation using the describe command, providing the generated restore name:
velero restore describe workloads-backup-20220208105156
The describe output should eventually report a “Completed” phase:
Name: workloads-backup-20220208105156
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Completed
Total items to be restored: 10
Items restored: 10
Started: 2022-02-08 10:51:56 +0200 EET
Completed: 2022-02-08 10:51:57 +0200 EET
Backup: workloads-backup
Namespaces:
Included: all namespaces found in the backup
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Preserve Service NodePorts: auto
Listing the resources of the workloads namespace confirms that the restoration process was successful:
sudo microk8s kubectl get all -n workloads
Summing up
Although Velero is a powerful tool with a large set of configuration options, it is also very easy to use. All you need to decide on is a backup strategy: the backend that will hold the backups and the schedule on which they are taken. The rest is taken care of by the tool itself.
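For example, recurring backups of the workloads namespace could be scheduled with a single command; the schedule name and cron expression below are illustrative:
# Take a backup of the workloads namespace every day at 03:00
velero schedule create workloads-daily --schedule="0 3 * * *" --include-namespaces=workloads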