This page describes how to configure MicroK8s to ship logs and metrics data to an external observability stack for logging, monitoring and alerting.
What’s covered in this documentation:
- What metrics endpoints are available
- Any configuration needed to access metrics endpoints
- The role of the metrics server
- Location of logs for log forwarding
What isn’t covered:
- How to set up and configure Grafana, Prometheus, Alertmanager or any of the other tools mentioned; please refer to the documentation for those products
- A recommendation of which logging exporter to use
Low-level building blocks for monitoring & logging
Metrics
Kubernetes cluster metrics are exposed by each service through designated API endpoints, so it is important to know which hosts each service runs on:
- kube-apiserver runs on all hosts but serves the same content on any host
- kube-proxy runs on every host
- kubelet runs on every host
- kube-scheduler runs on the three hosts (at most) which are dqlite voters
- kube-controller-manager runs on the three hosts (at most) which are dqlite voters
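In a multi-node cluster you can check which nodes are currently the dqlite voters by inspecting the cluster status; the datastore information is shown once high availability is active:

```bash
# Show cluster status, including the datastore (dqlite) master and standby nodes
microk8s status
```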
Metrics endpoints:
- On kube-controller-manager, kube-proxy, kube-apiserver, kube-scheduler:
  - /metrics
- On kubelet:
  - /metrics
  - /metrics/cadvisor
  - /metrics/resource
  - /metrics/probes
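As a quick sanity check, these endpoints can be queried through the API server proxy with kubectl. A minimal sketch; node-1 is a placeholder for one of your node names (list them with microk8s kubectl get nodes):

```bash
# Query the kube-apiserver's own /metrics endpoint
microk8s kubectl get --raw /metrics | head

# Query a kubelet endpoint (here, cadvisor metrics) via the API server's node proxy
microk8s kubectl get --raw /api/v1/nodes/node-1/proxy/metrics/cadvisor | head
```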
Access to the metrics endpoints is tuned with the service startup arguments. Configuration arguments are added to the files in /var/snap/microk8s/current/args/.
After updating the configuration file, MicroK8s should be restarted with:
microk8s.stop
microk8s.start
kube-apiserver
(edit the file /var/snap/microk8s/current/args/kube-apiserver)

option | type | default |
---|---|---|
--show-hidden-metrics-for-version | string | |
--bind-address | ip | 0.0.0.0 |
--secure-port | int | 16443 |
kube-controller-manager
(edit the file /var/snap/microk8s/current/args/kube-controller-manager)

option | type | default |
---|---|---|
--show-hidden-metrics-for-version | string | |
--secure-port | int | 10257 |
--bind-address | ip | 0.0.0.0 |
kube-proxy
(edit the file /var/snap/microk8s/current/args/kube-proxy)

option | type | default |
---|---|---|
--show-hidden-metrics-for-version | string | |
--metrics-bind-address | ip:port | 127.0.0.1:10249 |
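Note that with the default --metrics-bind-address above, kube-proxy metrics are only reachable from the node itself. A quick local check:

```bash
# Read the kube-proxy metrics endpoint locally on a node (default bind address)
curl -s http://127.0.0.1:10249/metrics | head
```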
kube-scheduler
(edit the file /var/snap/microk8s/current/args/kube-scheduler)

option | type | default |
---|---|---|
--show-hidden-metrics-for-version | string | |
--bind-address | ip | 0.0.0.0 |
--port | int | 10251 |
kubelet
(edit the file /var/snap/microk8s/current/args/kubelet)

option | type | default |
---|---|---|
--address | ip | 0.0.0.0 |
--port | int | 10250 |
The --show-hidden-metrics-for-version argument lets you indicate the previous version for which you want hidden metrics to be shown. Only the previous minor version is meaningful; other values are not allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to give you the opportunity to notice when the next release hides additional metrics, rather than being surprised when they are removed entirely in the release after that.
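As an illustration, to keep the previous release's hidden metrics visible on the API server you could append the flag to its arguments file and restart; MicroK8s args files take one flag per line. The value 1.16 is a placeholder for the minor version before the one you are running:

```bash
# Append the flag to the kube-apiserver arguments file (one flag per line)
echo '--show-hidden-metrics-for-version=1.16' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver

# Restart MicroK8s so the change takes effect
microk8s.stop
microk8s.start
```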
The API endpoints above are subject to RBAC. Make sure you configure RBAC according to your needs (see the example in the Kubernetes docs).
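For instance, a scraper authenticating with a service account needs read access to the non-resource /metrics URL. A minimal sketch, using a hypothetical metrics-reader role name; bind it to your scraper's service account with a ClusterRoleBinding:

```bash
# Grant read access to the /metrics non-resource URL
microk8s kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
EOF
```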
Logs
Pod logs (for example, to be shipped to Elasticsearch) are found in /var/log/containers/, which contains symbolic links to the files in /var/log/pods/. These directories exist on every node, since all MicroK8s nodes run kubelet.
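A quick way to confirm this layout on a node (the log file name below is a placeholder; pick a real one from the listing):

```bash
# The entries in /var/log/containers/ are symbolic links into /var/log/pods/
ls -l /var/log/containers/ | head

# Tail one container log directly
tail -f /var/log/containers/<pod>_<namespace>_<container>-<id>.log
```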
To gather logs for the Kubernetes services themselves, note that they all run as systemd services (see the journalctl example after this list):
snap.microk8s.daemon-cluster-agent
snap.microk8s.daemon-containerd
snap.microk8s.daemon-apiserver
snap.microk8s.daemon-apiserver-kicker
snap.microk8s.daemon-proxy
snap.microk8s.daemon-kubelet
snap.microk8s.daemon-scheduler
snap.microk8s.daemon-controller-manager
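Log forwarders that can read the systemd journal can pick these units up directly; to inspect them manually, use journalctl. For example:

```bash
# Follow the kubelet service logs
journalctl -f -u snap.microk8s.daemon-kubelet

# Show the last hour of API server logs
journalctl -u snap.microk8s.daemon-apiserver --since "1 hour ago"
```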
High-level tools for monitoring, logging and alerting
Metrics Server
Metrics server collects resource metrics from kubelets and exposes them in the Kubernetes apiserver through the Metrics API. It is meant for autoscaling purposes only.
To get the metrics server running in MicroK8s, run the following:
microk8s enable metrics-server
Visit the metrics server project's docs for alternative installation methods.
The focus of the metrics server is on CPU and memory, as these metrics are used by the Horizontal and Vertical Pod Autoscalers. Note that the metrics server is not designed to give accurate readings of resource usage; treat its figures as autoscaling signals rather than monitoring data. As a user you can view the gathered metrics with the microk8s kubectl top command, for example:
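```bash
# Show CPU and memory usage per node
microk8s kubectl top node

# Show CPU and memory usage per pod across all namespaces
microk8s kubectl top pod --all-namespaces
```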
Prometheus
Prometheus collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specific conditions are observed.
Prometheus gathers metrics from the Kubernetes endpoints discussed in the previous sections. Prometheus is closely associated with Alertmanager.
Describing the deployment steps of Prometheus is outside the scope of this document. However, you should be aware of the possible deployment layouts. The typical use case is a number of MicroK8s clusters all shipping metrics to a central Prometheus installation. A few ways to achieve this layout are:
- Scrape remote k8s clusters: run exporters such as the Prometheus node-exporter or the Prometheus adapter for the Kubernetes metrics APIs on each MicroK8s cluster, and have a central Prometheus installation scrape them.
- Remote Prometheus instances as Grafana data sources: run a full Prometheus on each cluster and have a central Grafana treat each Prometheus as a separate data source. In this case each Prometheus service needs to be exposed and reachable from outside its cluster.
- Federation: with federation you can consolidate selected metrics from multiple k8s clusters into a central Prometheus (see the sketch after this list).
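As an illustration of the federation layout, the central Prometheus scrapes each cluster's Prometheus on its /federate endpoint. A minimal sketch of such a scrape job follows; merge it under the scrape_configs: section of the central instance's prometheus.yml. The match expression and the target address are placeholders:

```bash
# Sketch of a federation scrape job; the target is a cluster's exposed Prometheus
cat <<'EOF' > federate-job.yml
- job_name: 'federate'
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job=~"kubernetes-.*"}'
  static_configs:
    - targets:
      - 'cluster-a.example.com:9090'
EOF
```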
For the second and third layouts, where Prometheus needs to be installed on each cluster, you can make use of the Prometheus addon by running:
microk8s enable prometheus
Alternatively, see the Prometheus documentation for other installation methods.
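Once the addon is enabled, you can check that the stack's pods are running. A quick check, assuming the addon deploys into the monitoring namespace (the namespace may differ between MicroK8s versions):

```bash
# Verify the Prometheus stack pods
microk8s kubectl get pods -n monitoring
```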
Based on the metrics gathered, you may want to import corresponding Grafana dashboards. A number of predefined dashboards are available online.
Alertmanager
The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie.
For installation, please refer to the Alertmanager documentation.
A wide range of pre-defined alert rules are available online, for example, in the Prometheus community GitHub repo.
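To give a flavour of what such rules look like, here is a minimal sketch of a single alerting rule that fires when any scrape target has been down for five minutes; the file must be listed under rule_files: in the Prometheus configuration:

```bash
# Sketch: a minimal Prometheus alerting rule file
cat <<'EOF' > alert-rules.yml
groups:
- name: availability
  rules:
  - alert: TargetDown
    expr: up == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Scrape target {{ $labels.instance }} is down"
EOF
```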