
Configure OIDC with Dex for a MicroK8s cluster


This guide is intended for development purposes only. Refer to the official Dex documentation for guidance on securing the deployment and configuring additional connectors and identity providers.

Dex will be deployed on top of the MicroK8s cluster, and exposed as a simple NodePort service.

This how-to will guide you through the following steps:

  • Install MicroK8s
  • Generate a self-signed certificate for Dex
  • Deploy Dex on MicroK8s
  • Configure MicroK8s API server to connect to Dex
  • Generate a kubeconfig file for clients authenticating via OIDC
  • Onboard a new client
  • Configure RBAC (Optional)

Install MicroK8s

Install the latest version of MicroK8s with the following command:

sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
newgrp microk8s

(it may be necessary to restart your session for the user to be added to the group).

Generate self-signed CA and certificates for Dex

Dex can be served over HTTP or HTTPS, but kube-apiserver only accepts issuers that are served over HTTPS for security reasons. This means that we will need a CA certificate, as well as certificates for our Dex server.

We will use the following script for this purpose. Replace andromeda with the hostname of your MicroK8s cluster and set IP.1 to its IP address. Save the script as certificates.sh.

NOTE: The IP address will be used in a few places. If you are testing and MicroK8s is running locally, you can stick to localhost instead.

NOTE: For production systems, consider using Cert Manager and Let’s Encrypt certificates instead.


mkdir -p ssl

cat << EOF > ssl/req.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = andromeda
IP.1 = <IP>
EOF

openssl genrsa -out ssl/ca.key 4096
openssl req -x509 -new -nodes -key ssl/ca.key -days 3650 -out ssl/ca.crt -subj "/CN=kube-ca"

openssl genrsa -out ssl/tls.key 4096
openssl req -new -key ssl/tls.key -out ssl/tls.csr -subj "/CN=kube-ca" -config ssl/req.cnf
openssl x509 -req -in ssl/tls.csr -CA ssl/ca.crt -CAkey ssl/ca.key -CAcreateserial -out ssl/tls.crt -days 3650 -extensions v3_req -extfile ssl/req.cnf

Then, build your certificates with:

chmod +x ./certificates.sh
./certificates.sh

The following 4 files will be created:

  • ssl/ca.key: This is the private key of our CA. It is used to sign new certificates. We will not need it for the rest of this tutorial, but make sure to keep it in a safe place.
  • ssl/ca.crt: This is the root CA certificate. kube-apiserver will use this when connecting to Dex.
  • ssl/tls.key: This is the private key that will be used by Dex to serve HTTPS traffic.
  • ssl/tls.crt: This is the certificate that will be used by Dex to serve HTTPS traffic.

Next, we will create a Kubernetes TLS secret named dex-certs, containing the certificate and key for Dex:

microk8s kubectl create secret tls dex-certs --cert=ssl/tls.crt --key=ssl/tls.key

Deploy Dex

As mentioned in the beginning, we will run Dex as a simple Deployment on our MicroK8s cluster, using the official Helm Chart. Refer to the Dex documentation for more details on deploying Dex.

Make sure to replace <IP> in the configuration below with the IP address of your MicroK8s cluster, as in the previous step when generating the certificates.

NOTE: The Dex configuration below only defines a single static user, admin@example.com, with password password. Adding more connectors (e.g. LDAP, Keystone, etc.) is outside the scope of this guide; please refer to the Dex documentation instead.

# config.yaml
volumes:
  - name: certs
    secret:
      secretName: dex-certs

volumeMounts:
  - name: certs
    readOnly: true
    mountPath: /certs

https:
  enabled: true

service:
  type: NodePort
  ports:
    https:
      nodePort: 31000

# Dex configuration
config:
  issuer: https://<IP>:31000/dex

  storage:
    type: memory

  web:
    tlsCert: /certs/tls.crt
    tlsKey: /certs/tls.key

  staticClients:
  - name: Kubernetes
    id: kubernetes
    secret: super-safe-client-secret
    redirectURIs:
    - http://localhost:8000  # for kubelogin

  enablePasswordDB: true

  staticPasswords:
  - email: "admin@example.com"
    # bcrypt hash of the string "password": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2)
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"

Deploy Dex with:

microk8s enable helm3
microk8s.helm3 repo add dex https://charts.dexidp.io
microk8s.helm3 repo update
microk8s.helm3 install dex dex/dex -f config.yaml

Wait for Dex to deploy, then verify that the CA cert can be used to trust the Dex certificate (again, replace <IP> with the address of your MicroK8s host):

curl --cacert ssl/ca.crt https://<IP>:31000/dex

If this prints some HTML in the terminal, and NOT a warning related to a missing issuer certificate, then you should be good to go.

Configure MicroK8s API server to connect to Dex

  1. Copy the Dex CA ssl/ca.crt in a place where the MicroK8s snap can access it:

    sudo cp ssl/ca.crt /var/snap/microk8s/current/certs/dex-ca.crt
  2. Edit /var/snap/microk8s/current/args/kube-apiserver and append the following lines. Make sure to replace <IP> with the IP address of your MicroK8s host.

    --oidc-issuer-url=https://<IP>:31000/dex
    --oidc-client-id=kubernetes
    --oidc-ca-file=${SNAP_DATA}/certs/dex-ca.crt
    --oidc-username-claim=name
    --oidc-username-prefix=oidc:

  3. Restart MicroK8s:

    sudo snap restart microk8s

If you are running a multi-node MicroK8s cluster, repeat this process on all control plane nodes.
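If you script this change across several control plane nodes, an idempotent append avoids duplicating flags on a re-run. Below is a minimal sketch, shown against a temporary file rather than the real /var/snap/microk8s/current/args/kube-apiserver; the add_flag helper and the <IP> placeholder are illustrative:

```shell
# Idempotent-append sketch for the kube-apiserver OIDC flags.
# Shown against a temporary file; on a real node the target is
# /var/snap/microk8s/current/args/kube-apiserver (edit as root).
ARGS_FILE=$(mktemp)
echo '--advertise-address=0.0.0.0' > "$ARGS_FILE"   # stand-in for existing args

add_flag() {
    # Append the full "--flag=value" line only if the flag is not present yet.
    name=${1%%=*}
    grep -q -- "^$name" "$ARGS_FILE" || echo "$1" >> "$ARGS_FILE"
}

add_flag '--oidc-issuer-url=https://<IP>:31000/dex'
add_flag '--oidc-client-id=kubernetes'
add_flag '--oidc-ca-file=${SNAP_DATA}/certs/dex-ca.crt'
add_flag '--oidc-username-claim=name'
add_flag '--oidc-username-prefix=oidc:'

# A second run changes nothing:
add_flag '--oidc-client-id=kubernetes'
grep -c -- '--oidc-client-id' "$ARGS_FILE"   # prints 1
```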

Important notes

  • The client ID that we set in kube-apiserver is the one from the staticClients section of the dex config file. The client secret IS NOT needed in the kube-apiserver.

  • With --oidc-username-claim=name --oidc-username-prefix=oidc:, Dex users will authenticate to the Kubernetes cluster as oidc:$username. This is useful for managing RBAC rules for users.

Generate a kubeconfig file for clients authenticating via OIDC

Next, we will create oidc-kubeconfig, a config file which authenticates to our cluster via Dex:

microk8s config > oidc-kubeconfig

Remove the admin user:

kubectl --kubeconfig=./oidc-kubeconfig config delete-user admin

Now, configure the oidc user, which dynamically retrieves OIDC tokens using kubectl oidc-login get-token. You should remove any scopes you do not need. Also:

  • Make sure to replace <IP> with the address of your MicroK8s host, as before.
  • Make sure the client id and client secret match the ones in the Dex staticClients section.

kubectl --kubeconfig=./oidc-kubeconfig config set-credentials oidc \
    --exec-api-version=client.authentication.k8s.io/v1beta1 \
    --exec-command=kubectl \
    --exec-arg=oidc-login \
    --exec-arg=get-token \
    --exec-arg=--certificate-authority=./dex-ca.crt \
    --exec-arg=--oidc-issuer-url=https://<IP>:31000/dex \
    --exec-arg=--oidc-client-id=kubernetes \
    --exec-arg=--oidc-client-secret=super-safe-client-secret \
    --exec-arg=--oidc-extra-scope=email

Now use the oidc user by default:

kubectl --kubeconfig=./oidc-kubeconfig config set-context --current --user=oidc
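For reference, after these commands the users entry in oidc-kubeconfig should look roughly like the following (the issuer URL is shown with the <IP> placeholder for your MicroK8s host):

```yaml
# Expected shape of the oidc user entry in oidc-kubeconfig (illustrative)
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --certificate-authority=./dex-ca.crt
      - --oidc-issuer-url=https://<IP>:31000/dex
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=super-safe-client-secret
      - --oidc-extra-scope=email
```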

In order to onboard new clients, you will need the oidc-kubeconfig file we just created, as well as the Dex CA file (ssl/ca.crt).

Onboard a new client

In this section, we will configure our local machine to connect to the MicroK8s cluster, using Dex for authentication.

  1. Install kubectl:

    sudo snap install kubectl --classic
  2. Download kubelogin and install it in your PATH as kubectl-oidc_login. The kubectl plugin system then automatically recognizes it and makes it available via the kubectl oidc-login command.

    curl -fsSL https://github.com/int128/kubelogin/releases/download/v1.25.0/kubelogin_linux_amd64.zip > kubelogin.zip
    unzip kubelogin.zip
    sudo install -c ./kubelogin /usr/local/bin/kubectl-oidc_login
  3. Retrieve the oidc-kubeconfig you created previously (e.g. using scp), then install in ~/.kube/config. Also, retrieve ssl/ca.crt and rename it to dex-ca.crt:

    mkdir -p ~/.kube
    cp /path/to/oidc-kubeconfig ~/.kube/config
    chmod 0600 ~/.kube/config
    cp /path/to/ca.crt ./dex-ca.crt
  4. Run any kubectl command. kubelogin will open a new browser window. Log in via Dex (the username is admin@example.com and the password is password, from the configuration we deployed earlier).

    kubectl get pod

    After authenticating successfully, you can close the window, and you will get a response:

    NAME                  READY   STATUS    RESTARTS   AGE
    dex-78d687897-v85c9   2/2     Running   0          30m52s

That’s all!

It is now possible to access the Kubernetes cluster using kubectl commands as normal. The first time, a browser window will open for the user to log in through Dex.

Configure RBAC (optional)

Configuring RBAC for your Kubernetes cluster is useful in cases when multiple people (with multiple roles) need to access the cluster. For example, you may have operators with admin access, multiple developer teams with access limited to a single namespace, or monitoring roles with read-only access to very specific resources. For such scenarios, it is heavily recommended to configure Role-Based Access Control (RBAC) for your cluster.

In a MicroK8s cluster, enable RBAC with the following command (on any control plane node):

microk8s enable rbac

After enabling RBAC, try running any kubectl command as the OIDC user:

kubectl get pod

This should return an error like the following:

Error from server (Forbidden): pods is forbidden: User "oidc:admin" cannot list resource "pods" in API group "" in the namespace "default"

This is because our user (oidc:admin) has authenticated properly, but is not authorized to perform any actions in the cluster. We can authorize our user by creating a RoleBinding (for a single namespace) or a ClusterRoleBinding (for all namespaces in the cluster). Below you can see some examples. cluster-admin and view are ClusterRoles that exist by default in a Kubernetes cluster. The commands below need to run from the MicroK8s node (the microk8s kubectl command runs as admin in the cluster).
Note: this assumes --oidc-username-prefix=oidc: and --oidc-username-claim=name in the kube-apiserver arguments.

microk8s kubectl create clusterrolebinding oidc-admin --user=oidc:admin --clusterrole=cluster-admin

Next, we will give the oidc:admin user read-only access to the cluster instead.

First, revert the previous ClusterRoleBinding:

microk8s kubectl delete clusterrolebinding oidc-admin

… now give view access to the oidc:admin user:

microk8s kubectl create clusterrolebinding oidc-view --user=oidc:admin --clusterrole=view

This can be tested on the local machine:

kubectl auth can-i get pods             # yes
kubectl auth can-i get deployments      # yes
kubectl auth can-i create deployments   # no
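The view binding above is cluster-wide. For the RoleBinding variant mentioned earlier (access limited to a single namespace), a manifest along these lines can be used; the dev namespace and the choice of the built-in edit ClusterRole are illustrative:

```yaml
# rolebinding.yaml -- grant oidc:admin the built-in "edit" role,
# but only inside the (illustrative) "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oidc-admin-edit
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "oidc:admin"
```

Apply it from a MicroK8s node with microk8s kubectl apply -f rolebinding.yaml (after creating the namespace).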
