
How to set up MicroK8s with (Micro)Ceph storage

With the 1.28 release, we introduced a new rook-ceph addon that allows users to easily set up, import, and manage Ceph deployments via rook.

In this guide we show how to set up a Ceph cluster with MicroCeph, give it three virtual disks backed by local files, and import the Ceph cluster into MicroK8s using the rook-ceph addon.

Install MicroCeph

MicroCeph is a lightweight way of deploying a Ceph cluster with a focus on reduced operations. It is distributed as a snap, so it can be installed with:

sudo snap install microceph --channel=latest/edge

First, we need to bootstrap the Ceph cluster:

sudo microceph cluster bootstrap

In this guide, we do not cluster multiple nodes. The interested reader can look into the official docs on how to form a multinode Ceph cluster with MicroCeph.

At this point we can check the status of the cluster and query the list of available disks, which should be empty. The cluster status is queried with:

sudo microceph.ceph status                                                                                                                                                                                        

Its output should look like:

  cluster:                                                                                                                                                                                                                                  
    id:     b5205159-8092-4be4-9f26-8176c397c929                                                                                                                                                                                            
    health: HEALTH_OK                                                                                                                                                                                                                       
                                                                                                                                                                                                                                            
  services:                                                                                                                                                                                                                                 
    mon: 1 daemons, quorum ip-172-31-3-156 (age 22s)                                                                                                                                                                                        
    mgr: ip-172-31-3-156(active, since 14s)                                                                                                                                                                                                 
    osd: 0 osds: 0 up, 0 in                                                                                                                                                                                                                 
                                                                                                                                                                                                                                            
  data:                                                                                                                                                                                                                                     
    pools:   0 pools, 0 pgs                                                                                                                                                                                                                 
    objects: 0 objects, 0 B                                                                                                                                                                                                                 
    usage:   0 B used, 0 B / 0 B avail                                                                                                                                                                                                      
    pgs:                                                                                                                                                                                                                                    

The disk list is shown with:

sudo microceph disk list                                                                    

In our empty cluster the disk list should be:

Disks configured in MicroCeph:                                                                                        
+-----+----------+------+                                  
| OSD | LOCATION | PATH |                                                                                             
+-----+----------+------+                                                                                             
                                                                                                                      
Available unpartitioned disks on this system:                                                                         
+-------+----------+------+------+                                                                                                                                                                                                          
| MODEL | CAPACITY | TYPE | PATH |                                                                                    
+-------+----------+------+------+                                            

Add virtual disks

The following loop creates three files under /mnt that will back respective loop devices. Each virtual disk is then added as an OSD to Ceph:

for l in a b c; do
  loop_file="$(sudo mktemp -p /mnt XXXX.img)"
  sudo truncate -s 1G "${loop_file}"
  loop_dev="$(sudo losetup --show -f "${loop_file}")"
  # the block-devices plug doesn't allow accessing /dev/loopX
  # devices so we make those same devices available under alternate
  # names (/dev/sdiY)
  minor="${loop_dev##/dev/loop}"
  sudo mknod -m 0660 "/dev/sdi${l}" b 7 "${minor}"
  sudo microceph disk add --wipe "/dev/sdi${l}"
done
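
Before querying Ceph, you can sanity-check the backing devices created by the loop above. The paths below assume the loop files under /mnt and the /dev/sdiX aliases from the previous step:

```shell
# List the loop devices backed by the files under /mnt
sudo losetup -l | grep /mnt

# Show the alternate block device nodes created for MicroCeph
ls -l /dev/sdia /dev/sdib /dev/sdic
```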

At this point the disks show up in the output of the sudo microceph.ceph status command:

  cluster:
    id:     b5205159-8092-4be4-9f26-8176c397c929
    health: HEALTH_OK
  
  services:
    mon: 1 daemons, quorum ip-172-31-3-156 (age 115s)
    mgr: ip-172-31-3-156(active, since 107s)
    osd: 3 osds: 3 up (since 25s), 3 in (since 29s)
  
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   25 MiB used, 3.0 GiB / 3 GiB avail
    pgs:     1 active+clean

And in the output of sudo microceph disk list:

Disks configured in MicroCeph:
+-----+-----------------+-----------+
| OSD |    LOCATION     |   PATH    |
+-----+-----------------+-----------+
| 0   | ip-172-31-3-156 | /dev/sdia |
+-----+-----------------+-----------+
| 1   | ip-172-31-3-156 | /dev/sdib |
+-----+-----------------+-----------+
| 2   | ip-172-31-3-156 | /dev/sdic |
+-----+-----------------+-----------+

Available unpartitioned disks on this system:
+-------+----------+------+------+
| MODEL | CAPACITY | TYPE | PATH |
+-------+----------+------+------+

It is worth looking into customizing your Ceph setup at this point. Since this cluster is local and is going to be used by a local MicroK8s deployment, we set the default replica count to 2, disable standby manager modules, and set the bucket type used for chooseleaf in a CRUSH rule to 0 (osd), so replicas can be placed across OSDs on a single host:

sudo microceph.ceph config set global osd_pool_default_size 2                               
sudo microceph.ceph config set mgr mgr_standby_modules false                                                                                                                                                      
sudo microceph.ceph config set osd osd_crush_chooseleaf_type 0
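
You can verify that the settings took effect with ceph config dump, which lists all options stored in the cluster's configuration database:

```shell
# Show every option set in the cluster configuration database,
# including the three options set above
sudo microceph.ceph config dump
```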

Refer to the Ceph docs to shape the cluster according to your needs.

Connect MicroCeph to MicroK8s

The rook-ceph addon first appeared in the 1.28 release, so we need a MicroK8s deployment channel greater than or equal to 1.28:

sudo snap install microk8s --channel=1.28/stable
sudo microk8s status --wait-ready

Note: Before enabling the rook-ceph addon on a strictly confined MicroK8s, make sure the rbd kernel module is loaded with sudo modprobe rbd.
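
One way to load the module now and keep it loaded across reboots is via the standard systemd modules-load.d mechanism; the file name rbd.conf is just a convention:

```shell
# Load the rbd kernel module immediately
sudo modprobe rbd

# Have systemd load it again on every boot
echo rbd | sudo tee /etc/modules-load.d/rbd.conf
```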

The output of enabling the addon with sudo microk8s enable rook-ceph describes the next steps for importing a Ceph cluster:

Infer repository core for addon rook-ceph                                                                                                                                                                                                   
Add Rook Helm repository https://charts.rook.io/release                                                                                                                                                                                     
"rook-release" has been added to your repositories                                                                                                                                                                                          
...
=================================================

Rook Ceph operator v1.11.9 is now deployed in your MicroK8s cluster and
will shortly be available for use.

As a next step, you can either deploy Ceph on MicroK8s, or connect MicroK8s with an
existing Ceph cluster.

To connect MicroK8s with an existing Ceph cluster, you can use the helper command
'microk8s connect-external-ceph'. If you are running MicroCeph on the same node, then
you can use the following command:

    sudo microk8s connect-external-ceph

Alternatively, you can connect MicroK8s with any external Ceph cluster using:

    sudo microk8s connect-external-ceph \
        --ceph-conf /path/to/cluster/ceph.conf \
        --keyring /path/to/cluster/ceph.keyring \
        --rbd-pool microk8s-rbd

For a list of all supported options, use 'microk8s connect-external-ceph --help'.

To deploy Ceph on the MicroK8s cluster using storage from your Kubernetes nodes, refer
to https://rook.io/docs/rook/latest-release/CRDs/Cluster/ceph-cluster-crd/

As we have already set up MicroCeph, having it managed by rook is done with just:

sudo microk8s connect-external-ceph

At the end of this process you should have a storage class ready to use:

NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   3h38m
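
To verify the storage class works end to end, you can create a small PersistentVolumeClaim against it. The claim name test-rbd-pvc below is just an example:

```shell
microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should shortly report STATUS Bound
microk8s kubectl get pvc test-rbd-pvc
```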

Further reading
