Prometheus Operator and kube-prometheus, Part 2 - How to monitor Kubernetes 1.23+ kubeadm clusters

This article was last updated on February 7, 2024.

Brief introduction

kube-prometheus-stack bundles the Prometheus Operator, exporters, rules, Grafana, and Alertmanager needed to monitor Kubernetes clusters.

However, for a Kubernetes cluster built with kubeadm, it is still necessary to customize the Helm installation.

This article combines a recent Kubernetes version (v1.23+) with the common kubeadm installation method and walks through, hands-on:

  • What special configurations are required for kubeadm
  • How to install the Prometheus Operator via the kube-prometheus-stack Helm chart
  • How to configure component monitoring for a cluster installed with kubeadm

Begin!

Prerequisites

  • kubeadm
  • helm3

What special configurations are required for kubeadm

In order for the Prometheus Operator to properly collect the metrics of a Kubernetes v1.23+ cluster built with kubeadm, you need to make some special configuration changes to kubeadm.

By default, kubeadm binds several of its control-plane components to the node's localhost (127.0.0.1) address; this affects kube-controller-manager, kube-proxy, and kube-scheduler.

For monitoring, however, these endpoints need to be exposed so that Prometheus can scrape their metrics. We therefore need to bind these components to 0.0.0.0.

Log in to the kubeadm master node and make the following modifications:

Controller Manager and Scheduler components

By default, kubeadm does not expose the two services we want to monitor (kube-controller-manager and kube-scheduler). So, to get the most out of the kube-prometheus-stack Helm chart, we need to make some quick adjustments to the Kubernetes cluster: since we will monitor kube-controller-manager and kube-scheduler later, their addresses and ports must be exposed to the cluster.

By default, kubeadm runs these components as static pods on your host, bound to 127.0.0.1. There are several ways to change this; the recommended one is to use a kubeadm config file. The following is an example configuration:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
...
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
...
kubernetesVersion: "v1.23.1"
...
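
For a brand-new cluster, this configuration is passed to kubeadm when the control plane is initialized; a minimal sketch (the file name kubeadm-config.yaml is just an example):

# run on the node being initialized as the control plane
kubeadm init --config kubeadm-config.yaml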

🐾 Note the .scheduler.extraArgs and .controllerManager.extraArgs settings above. They expose the kube-controller-manager and kube-scheduler services to the other components of the cluster.

Also, if you run the Kubernetes core components as pods in the kube-system namespace, make sure that the kube-prometheus-exporter-kube-scheduler and kube-prometheus-exporter-kube-controller-manager services (these two services are created by kube-prometheus-stack so that the Prometheus Operator can monitor the two components through ServiceMonitors) have a spec.selector that matches the labels of those pods, as in the check sketched below.
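
A quick way to cross-check this; a sketch (kubeadm labels its static pods with component=<name>; the exact service names created by the chart depend on the release name, so list them first and substitute the placeholder):

# labels on the kubeadm static pods
kubectl -n kube-system get pods -l component=kube-scheduler --show-labels
kubectl -n kube-system get pods -l component=kube-controller-manager --show-labels

# selectors of the services created by kube-prometheus-stack
kubectl -n kube-system get svc
kubectl -n kube-system get svc <scheduler-service-name> -o jsonpath='{.spec.selector}'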

If you already have a Kubernetes cluster deployed with kubeadm, you can make kube-controller-manager and kube-scheduler listen on all addresses directly:

sed -e "s/- --bind-address=127.0.0.1/- --bind-address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -e "s/- --bind-address=127.0.0.1/- --bind-address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-scheduler.yaml

Kube Proxy component

📝 Note:
In general, kube-proxy is bound to all addresses, but the corresponding metricsBindAddress does not necessarily follow that configuration; see "Before the change" below.

For the kube-proxy component, after the kubeadm installation is complete, you need to modify the metricsBindAddress in the kube-proxy ConfigMap in the kube-system namespace.

The changes are as follows:

Before the change:

...
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
metricsBindAddress: 127.0.0.1:10249
...

After the change:

kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
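
If you prefer not to edit the ConfigMap by hand, the same change can be made non-interactively; a sketch (assumes metricsBindAddress is currently set to 127.0.0.1:10249 as shown above):

kubectl -n kube-system get cm kube-proxy -o yaml | \
  sed 's/metricsBindAddress: 127.0.0.1:10249/metricsBindAddress: 0.0.0.0:10249/' | \
  kubectl apply -f -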

Then restart kube-proxy:

kubectl -n kube-system rollout restart daemonset/kube-proxy

etcd configuration

The etcd configuration is not detailed here; you can read directly: Prometheus Operator monitors etcd clusters - Yang Ming's blog.

However, the method in that link is rather cumbersome, so a simpler one is recommended: add a flag to the etcd configuration so that it also listens for metrics on the node IP (the kubelet restarts the etcd static pod automatically once its manifest changes):

# on the machine where etcd runs
master_ip=192.168.1.5
sed -i "s#--listen-metrics-urls=.*#--listen-metrics-urls=http://127.0.0.1:2381,http://$master_ip:2381#" /etc/kubernetes/manifests/etcd.yaml

Verify the kubeadm configuration

To summarize, after the configuration changes above, the metrics listening ports of the Kubernetes components are:

  • Controller Manager: (Kubernetes v1.23+)
    • Port: 10257
    • Protocol: https
  • Scheduler: (Kubernetes v1.23+)
    • Port: 10259
    • Protocol: https
  • Kube Proxy
    • Port: 10249
    • Protocol: http
  • etcd
    • Port: 2381
    • Protocol: http

You can use the netstat command to check whether all of the previous configurations have taken effect.

Execute on the master and etcd nodes:

$ sudo netstat -tulnp | grep -e 10257 -e 10259 -e 10249 -e 2381
tcp 0 0 192.168.1.5:2381 0.0.0.0:* LISTEN 1400/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 1400/etcd
tcp6 0 0 :::10257 :::* LISTEN 1434/kube-controlle
tcp6 0 0 :::10259 :::* LISTEN 1486/kube-scheduler
tcp6 0 0 :::10249 :::* LISTEN 4377/kube-proxy

# test etcd metrics
curl http://localhost:2381/metrics

# test kube-proxy metrics
curl http://localhost:10249/metrics

Install and customize Helm Values via kube-prometheus-stack

Here we go straight to the two steps mentioned above:

Before we can install kube-prometheus-stack with Helm, we need to create a values.yaml to adjust the chart's default values for the kubeadm cluster.

Configure persistent storage for Prometheus and AlertManager

It is recommended to configure persistent storage for Prometheus and Alertmanager instead of using emptyDir.
The specific storage configuration depends on the actual situation of your cluster, so it is not covered in depth here; a minimal sketch follows.
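
As a starting point only, a minimal sketch of the storage-related values (the StorageClass name local-path and the sizes are assumptions; adjust them to your cluster):

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: local-path  # assumption: replace with your StorageClass
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: local-path  # assumption: replace with your StorageClass
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi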

etcd

The metrics port of kubeadm's etcd is 2381 (not the Helm chart's default value of 2379), so we need to override this value explicitly.

kubeEtcd:
  enabled: true
  service:
    enabled: true
    port: 2381
    targetPort: 2381

Kubernetes Controller Manager

There is not much to configure here. For HTTPS and the port, if the relevant keys are empty or not set, the values are determined dynamically based on the target Kubernetes version, because the default port changed in Kubernetes 1.22. The keys to watch are: .kubeControllerManager.service.port, .kubeControllerManager.service.targetPort, .kubeControllerManager.serviceMonitor.https, and .kubeControllerManager.serviceMonitor.insecureSkipVerify.

If metrics cannot be scraped or something looks abnormal after configuration, adjust these values to your actual situation.

kubeControllerManager:
  enabled: true
  ...
  service:
    enabled: true
    port: null
    targetPort: null
  serviceMonitor:
    enabled: true
    ...
    https: null
    insecureSkipVerify: null
    ...

Kubernetes Scheduler

Likewise, there is not much to configure here. For HTTPS and the port, if the relevant keys are empty or not set, the values are determined dynamically based on the target Kubernetes version, because the default port changed in Kubernetes 1.23. The keys to watch are: .kubeScheduler.service.port, .kubeScheduler.service.targetPort, .kubeScheduler.serviceMonitor.https, and .kubeScheduler.serviceMonitor.insecureSkipVerify.

If metrics cannot be scraped or something looks abnormal after configuration, adjust these values to your actual situation.

kubeScheduler:
  enabled: true
  ...
  service:
    enabled: true
    port: 10259
    targetPort: 10259
  serviceMonitor:
    enabled: true
    ...
    https: true
    insecureSkipVerify: true
    ...

Kubernetes Proxy

The same applies here; adjust HTTPS and the port as needed, as follows:

kubeProxy:
  enabled: true
  endpoints: []
  service:
    enabled: true
    port: 10249
    targetPort: 10249
  serviceMonitor:
    enabled: true
    ...
    https: false
    ...

Install kube-prometheus-stack via Helm

Add the Helm repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo list
helm repo update prometheus-community

Installation:

helm upgrade --install \
  --namespace prom \
  --create-namespace \
  -f values.yaml \
  monitor prometheus-community/kube-prometheus-stack
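
After the release is installed, a quick sanity check (the release name monitor and namespace prom match the command above):

helm -n prom status monitor
kubectl -n prom get pods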

Verify

The main thing to verify here is whether the kubeadm Kubernetes components are being monitored properly, which can be checked directly in the Prometheus UI or the Grafana UI.

The Prometheus UI and Grafana UI can be exposed via Ingress or NodePort and then accessed (port-forwarding also works for a quick look; see the sketch below):
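
A port-forwarding sketch (the exact service names depend on the release name, so list them first and adjust):

# list the services the chart created
kubectl -n prom get svc

# assumed names based on the release name "monitor"; adjust to what the list shows
kubectl -n prom port-forward svc/monitor-kube-prometheus-st-prometheus 9090:9090
kubectl -n prom port-forward svc/monitor-grafana 3000:80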

Open Status -> Targets to view the monitoring status; a few components are shown here as an illustration:

Controller Manager monitoring status

Kube Proxy monitoring status

Kube Scheduler monitoring status
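
Alternatively, a quick query in the Prometheus UI shows whether the scrapes succeed; a sketch (the job label values are assumed chart defaults and may differ in your setup):

up{job=~"kube-scheduler|kube-controller-manager|kube-proxy|kube-etcd"}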

In Grafana, you can log in directly and view the corresponding dashboards, as shown below:

etcd Grafana Dashboard

🎉🎉🎉
