MicroK8s CAPI - cluster upgrades

In this guide we walk through upgrading a MicroK8s cluster that is managed by Cluster API (CAPI). We demonstrate how to upgrade a cluster from one minor version to another, and how to pin a cluster to a specific patch release using a snapstore proxy.

Minor version upgrade

Kubernetes is an ever-evolving platform that ships a minor release approximately every four months. Each new release brings new features, bug fixes, and performance improvements, making the upgrade of a Kubernetes cluster a common and important task. In the Cluster Provisioning with CAPI guide, we provide a detailed walkthrough of how to generate a cluster manifest. This manifest includes two main sections: MicroK8sControlPlane and MachineDeployment. The MicroK8sControlPlane section specifies the Kubernetes version the control plane runs, while the MachineDeployment section specifies the Kubernetes version the worker nodes run. The snippets that follow show how to set the Kubernetes version to 1.25.0 in both sections of the cluster manifest:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
metadata:
  name: test-ci-cluster-control-plane
  namespace: default
spec:
  controlPlaneConfig:
    clusterConfiguration:
      portCompatibilityRemap: true
    initConfiguration:
      IPinIP: true
      addons:
      - dns
      joinTokenTTLInSecs: 900000
  machineTemplate:
    infrastructureTemplate:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: test-ci-cluster-control-plane
  replicas: 1
  version: v1.25.0
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-ci-cluster-md-0
  namespace: default
spec:
  clusterName: test-ci-cluster
  replicas: 1
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: MicroK8sConfigTemplate
          name: test-ci-cluster-md-0
      clusterName: test-ci-cluster
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: test-ci-cluster-md-0
      version: v1.25.0

To upgrade a cluster to a new version, amend the cluster manifest with the desired version and re-apply it:

sudo microk8s kubectl apply -f cluster.yaml
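
For example, to move the cluster above from v1.25.0 to v1.26.0 you would set version: v1.26.0 in both sections and re-apply the manifest. You can then follow the rollout from the management cluster (a sketch; it assumes the cluster name test-ci-cluster from the manifests above and that clusterctl is installed):

# Watch machines get replaced or refreshed with the new version
sudo microk8s kubectl get machines -A -w

# Summarise the overall cluster and machine state
clusterctl describe cluster test-ci-cluster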

When upgrading a MicroK8s cluster managed by CAPI, the bootstrap and control-plane providers will automatically detect the new version and initiate the update process.
Single-node control plane clusters only support in-place upgrades, with minimal downtime for kube-apiserver (during the service restart). In-place upgrades are handled by spawning a pod in the cluster; this pod might have to be deleted manually at the end.
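
If the upgrade pod is left behind, you can remove it once the node reports the new version (a sketch; the actual pod name and namespace depend on your deployment):

# Locate the leftover upgrade pod, then delete it
sudo microk8s kubectl get pods -A
sudo microk8s kubectl delete pod <upgrade-pod-name> -n <namespace>
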
For worker nodes, the upgrade strategy is a rolling one. The CAPI provider spawns a node with the new version, selects one of the old nodes, drains it, and then removes it from the cluster. This process is repeated until all worker nodes have been replaced by nodes running the new version.

The rolling strategy is also the default for upgrading control planes with three or more nodes operating in HA mode, where it causes no service disruption. In non-HA clusters, however, a rolling upgrade would result in downtime, so an in-place strategy is followed instead: the CAPI provider cycles through all control plane nodes and simply refreshes the snap to the new version. To try the in-place upgrade on an HA cluster, set the respective flag in the MicroK8sControlPlane section of the cluster manifest:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
metadata:
  name: test-ci-cluster-control-plane
  namespace: default
spec:
  controlPlaneConfig:
    clusterConfiguration:
      portCompatibilityRemap: true
    initConfiguration:
      IPinIP: true
      addons:
      - dns
      joinTokenTTLInSecs: 900000
  machineTemplate:
    infrastructureTemplate:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: test-ci-cluster-control-plane
  replicas: 1
  version: v1.25.0
  upgradeStrategy: "InPlaceUpgrade"

Patch releases

By default, MicroK8s automatically updates to the latest patch release within the track it follows. This means the patch number specified in the cluster manifest is ignored. For example, if the version field in the manifest specifies v1.25.0, the trailing patch number 0 is disregarded, and if the snapstore holds v1.25.4 in the 1.25 track, that is the version that will be installed. There may be cases, however, where you want to pin the deployment to a specific version. In such cases you will need to set up a snapstore proxy, following the official instructions. On an Ubuntu machine, setting up a snapstore proxy starts with:

sudo snap install snap-store-proxy
sudo apt install postgresql

Get the IP or host endpoint of the proxy machine and set it as the proxy domain:

sudo snap-proxy config proxy.domain=<IP or host domain>

Configure PostgreSQL with the following script:

$ cat ./ps.sql 
CREATE ROLE "snapproxy-user" LOGIN CREATEROLE PASSWORD 'snapproxy-password';

CREATE DATABASE "snapproxy-db" OWNER "snapproxy-user";

\connect "snapproxy-db"

CREATE EXTENSION "btree_gist";

and apply the configuration:

sudo -u postgres psql < ps.sql 
sudo snap-proxy config proxy.db.connection="postgresql://snapproxy-user@localhost:5432/snapproxy-db"

Register the proxy:

sudo snap-proxy register

Get the store proxy ID and endpoint with:

snap-proxy status
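
The reported domain and store ID are what cluster nodes need in order to fetch snaps through the proxy. For reference, the client-side setup that the CAPI provider automates on each node corresponds roughly to the standard snap proxy client configuration (a sketch; substitute your own proxy domain and store ID):

# Acknowledge the proxy's store assertions, then point snapd at the proxy
curl -sL http://<proxy domain>/v2/auth/store/assertions | sudo snap ack /dev/stdin
sudo snap set core proxy.store=<store ID>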

The snapstore proxy domain and the store ID need to be set in the cluster manifest. In the example that follows, we have set up a snapstore proxy on an AWS VM instance, ec2-3-231-147-126.compute-1.amazonaws.com:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
metadata:
  name: test-ci-cluster-control-plane
  namespace: default
spec:
  controlPlaneConfig:
    clusterConfiguration:
      portCompatibilityRemap: true
    initConfiguration:
      IPinIP: true
      snapstoreProxyDomain: "ec2-3-231-147-126.compute-1.amazonaws.com"
      snapstoreProxyId: "zIozFdcA7qd1eDWh3Fzfdsadsadsa"
      addons:
      - dns
      - ingress
      joinTokenTTLInSecs: 900000
  machineTemplate:
    infrastructureTemplate:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: test-ci-cluster-control-plane
  replicas: 1
  version: v1.25.0

and

apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: MicroK8sConfigTemplate
metadata:
  name: test-ci-cluster-md-0
  namespace: default
spec:
  template:
    spec:
      clusterConfiguration:
        portCompatibilityRemap: true
      initConfiguration:
        snapstoreProxyDomain: "ec2-3-231-147-126.compute-1.amazonaws.com"
        snapstoreProxyId: "zIozFdcA7qd1eDWh3Fzfdsadsadsa"

The snapstore proxy allows for snap revision overrides. Please visit the snapstore proxy documentation pages for the full list of offered features.
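
For example, pinning the microk8s snap to a specific revision can be done with an override on the proxy (a sketch; the channel and revision shown are illustrative, use values from your own store):

# Pin the 1.25/stable channel of microk8s to a specific revision, then verify
sudo snap-proxy override microk8s 1.25/stable=<revision>
sudo snap-proxy list-overrides microk8s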
