
Addon: OpenEBS Mayastor clustered storage

Available from: 1.24
Compatibility: amd64 arm64 classic
Source: Mayastor

MicroK8s supports a cluster-ready replicated storage solution based on OpenEBS Mayastor. This does require some initial setup and configuration, as detailed below.

Requirements

Note: These requirements apply to ALL the nodes in a MicroK8s cluster. Please run the commands on each node.

  1. HugePages must be enabled. Mayastor requires at least 1024 2MB HugePages. (You can verify this, together with the kernel module requirement below, using the snippet after this list.)

    This can be achieved by running the following commands on each host:

    sudo sysctl vm.nr_hugepages=1024
    echo 'vm.nr_hugepages=1024' | sudo tee -a /etc/sysctl.conf
    
  2. The nvme_fabrics and nvme_tcp modules are required on all hosts. Install the modules with:

    sudo apt install linux-modules-extra-$(uname -r)
    

    Then enable them with (loading nvme_tcp also pulls in its nvme_fabrics dependency):

    sudo modprobe nvme_tcp
    echo 'nvme-tcp' | sudo tee -a /etc/modules-load.d/microk8s-mayastor.conf
    
  3. You should restart MicroK8s at this point:

    microk8s stop
    microk8s start
    
  4. The MicroK8s DNS and Helm3 addons are required. They will be enabled automatically if missing.
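
Before enabling the addon, you can quickly confirm the HugePages and kernel module settings on each node. These are standard read-only Linux checks, not part of the addon itself:

# confirm that 1024 HugePages are allocated
grep HugePages /proc/meminfo

# confirm that the nvme_tcp module (and its nvme_fabrics dependency) is loaded
lsmod | grep nvme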

Installation

Assuming you have configured your cluster as mentioned above, you can now enable Mayastor.

  1. Enable the Mayastor addon:

    sudo microk8s enable core/mayastor --default-pool-size 20G
    
  2. Wait for the mayastor control plane and data plane pods to come up:

    sudo microk8s.kubectl get pod -n mayastor
    
  3. The mayastor addon will automatically create one MayastorPool per node in the MicroK8s cluster. Each pool is backed by a sparse image file. Refer to the Mayastor documentation for information on using existing block devices.

    Verify that all mayastorpools are up and running with:

    sudo microk8s.kubectl get mayastorpool -n mayastor
    

    In a 3-node cluster, the output should look like this (capacity, used and available are reported in bytes):

    NAME               NODE   STATUS   CAPACITY      USED   AVAILABLE
    microk8s-m2-pool   m2     Online   21449670656   0      21449670656
    microk8s-m1-pool   m1     Online   21449670656   0      21449670656
    microk8s-m3-pool   m3     Online   21449670656   0      21449670656
    

Mayastor is now deployed!
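
If you are scripting the installation, you can also block until the mayastor data plane daemonset (the same daemonset referenced in the troubleshooting section below) has finished rolling out, instead of polling the pod list by hand. The timeout value is an arbitrary choice:

sudo microk8s.kubectl rollout status -n mayastor daemonset/mayastor --timeout=600s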

Deploy a test workload

The mayastor addon creates two storage classes:

  • mayastor: This can be used in single-node clusters.
  • mayastor-3: This requires at least 3 cluster nodes, as it replicates volume data across 3 storage pools, ensuring data redundancy.
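
Both classes should show up when listing the storage classes of the cluster (the exact columns depend on your Kubernetes version):

sudo microk8s.kubectl get storageclass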

Let’s create a simple pod that uses the mayastor storage class:

# pod-with-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: mayastor
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html

Then, create the pod with:

sudo microk8s.kubectl create -f pod-with-pvc.yaml

Verify that our PVC and pod have been created with:

sudo microk8s.kubectl get pod,pvc

The output should look like this:

NAME             READY   STATUS    RESTARTS   AGE
pod/test-nginx   1/1     Running   0          4m

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-pvc   Bound    pvc-e280b734-3224-4af3-af0b-e7ad3c4e6d79   5Gi        RWO            mayastor       4m
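
To confirm that the volume is actually writable, you can optionally write a file through the pod and read it back. The file contents below are arbitrary; the mount path comes from the manifest above:

sudo microk8s.kubectl exec test-nginx -- sh -c 'echo "hello from mayastor" > /usr/share/nginx/html/index.html'
sudo microk8s.kubectl exec test-nginx -- cat /usr/share/nginx/html/index.html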

Configure storage classes

For advanced use-cases, it is possible to define a custom storage class and configure parameters such as the number of replicas, the underlying protocol, and so on. For example, to define a storage class with 2 replicas, execute the following:

microk8s kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-2
parameters:
  repl: '2'
  protocol: 'nvmf'
  ioTimeout: '60'
  local: 'true'
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
EOF
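
A claim that consumes the new class looks the same as the earlier example, with only the storageClassName changed. This is a minimal sketch; the claim name and size are arbitrary:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-repl2
spec:
  storageClassName: mayastor-2
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }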

For more information, see Create Mayastor StorageClass(s).

Configure Mayastor pools

By default, the Mayastor addon will create one pool per node, backed by a local sparse image file. For production use, it is recommended that you use dedicated disks instead.

For convenience, a helper script is provided to easily create, list and delete mayastor pools from the cluster:

Examples:

# get help
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/pools.py --help
'

# create a mayastor pool using `/dev/sdb` on node `uk8s-1`
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/pools.py add --node uk8s-1 --device /dev/sdb
'

# create a mayastor pool of 100GB using a sparse image file on node `uk8s-1`. The image file will be placed under `/var/snap/microk8s/common/mayastor/data`.
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/pools.py add --node uk8s-1 --size 100GB
'

# list mayastor pools
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/pools.py list
'

# delete a mayastor pool. --force removes it even if the pool is in use, --purge removes the backing image file
# the mayastor pool name is required, as it appears in the output of the list command
sudo snap run --shell microk8s -c '
  $SNAP_COMMON/addons/core/addons/mayastor/pools.py remove microk8s-uk8s-1-pool --force --purge
'

For more information, see Create Mayastor Pool(s).

Troubleshooting

Unable to start mayastor data plane

Verify

Depending on the underlying hardware, the mayastor data plane pods may get stuck in a CrashLoopBackOff state when starting. This can be caused by a failure to initialize the EAL (Environment Abstraction Layer). Verify this by checking the logs of the daemonset with the following command…

microk8s.kubectl logs -n mayastor daemonset/mayastor

… and check that the logs contain an error message similar to this:

EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask
EAL: alloc_pages_on_heap(): Please try initializing EAL with --iova-mode=pa parameter
EAL: error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
thread 'main' panicked at 'Failed to init EAL', mayastor/src/core/env.rs:543:13
stack backtrace:
0: std::panicking::begin_panic
1: mayastor::core::env::MayastorEnvironment::init
2: mayastor::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

For more details, see canonical/microk8s-core-addons#25.

Solution

Edit the manifest of the mayastor daemonset with:

microk8s kubectl edit -n mayastor daemonset mayastor

Then, edit the command line of the mayastor container by adding the argument `--env-context=--iova-mode=pa`. Save and exit the editor to apply the changes, then wait for the mayastor pods to restart.
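
After saving, the container arguments of the daemonset should include the new flag. The fragment below is only a sketch of the relevant part of the manifest; it assumes the data plane container is also named mayastor, and all other fields stay as they are:

spec:
  template:
    spec:
      containers:
        - name: mayastor
          args:
            # keep the existing arguments and append:
            - --env-context=--iova-mode=pa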
