Grafana series (XIII): How to view Kubernetes Events with Loki collections

This article was last updated on July 24, 2024.

Synopsis

  1. Alarm notification for IoT edge clusters based on Kubernetes Events
  2. Alarm notification for IoT edge clusters based on Kubernetes Events (2): Further configuration

Overview

Kubernetes Events are super useful when analyzing K8s cluster issues.

Kubernetes Events can be treated as logs: each Event has a log-like format, including:

  1. Timestamp
  2. Component
  3. Reason

However, Kubernetes only persists Events for one hour by default, to reduce the load on etcd. So, consider using Loki to store and query these Events.
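Before any collection is in place, Events can be inspected directly with kubectl; the one-hour window comes from the kube-apiserver’s --event-ttl flag, which defaults to 1h:

# List Events across all namespaces, newest last.
# Only Events from roughly the last hour appear, because the
# kube-apiserver's --event-ttl flag defaults to 1h.
kubectl get events -A --sort-by=.lastTimestamp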

Implementation

As you can see from my previous articles, kubernetes-event-exporter can collect Kubernetes Events.

So let’s use kubernetes-event-exporter, taking the simplest route: writing Events to stdout.

Then, reuse Promtail’s pipeline configuration to add the Namespace as an additional label on the logs shipped to Loki.

kubernetes-event-exporter configuration

As follows:

logLevel: error
logFormat: json
trottlePeriod: 5
route:
  routes:
    - match:
        - receiver: "dump"
receivers:
  - name: "dump"
    stdout: {}
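With logFormat: json and the stdout receiver, each Event is written to the container’s standard output as one JSON object per line. A trimmed, hypothetical example (real Events carry many more fields, and exact field names depend on the exporter version):

{"reason":"Started","message":"Started container emqx","type":"Normal","count":1,"involvedObject":{"kind":"Pod","namespace":"emqx","name":"emqx-0"}}

The involvedObject.namespace field here is exactly what the Promtail pipeline below extracts as a label.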

Promtail configuration

As follows:

...
scrape_configs:
  - job_name: kubernetes-pods-app
    pipeline_stages:
      - cri: {}
      - match:
          selector: '{app="event-exporter"}'
          stages:
            - json:
                expressions:
                  namespace: involvedObject.namespace
            - labels:
                namespace: ""
...

The above configuration extracts the Namespace from the Event JSON at involvedObject.namespace and adds it to the log entry as a label named namespace.
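With that label in place, the Events of a single Namespace can be filtered in Grafana Explore with a LogQL query such as the following (assuming the exporter Pod carries the app="event-exporter" label used by the match selector above):

{app="event-exporter", namespace="emqx"}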

At this point, I can view just the Events of a specific Namespace (such as emqx), as shown below:

Events from the emqx Namespace

🎉🎉🎉

📝 Note:

My event-exporter is deployed in the monitoring Namespace.

❓️ Troubleshooting

When I first set this up, I found that the log output was wrong; an example of the format is shown below:

Incorrect log format

2022-04-20T22:26:19.526448119+08:00 stderr F I0420 {...json...}

This is because the container runtime I use follows the CRI log format, not Docker’s.

However, the configuration installed by default with Loki uses Docker’s stage parser, which caused the abnormal log format. The initial configuration was as follows:

...
- job_name: kubernetes-pods-name
  pipeline_stages:
    - docker: {}
...

Docker’s log format is as follows:

{"log":"level=info ts=2019-04-30T02:12:41.844179Z caller=filetargetmanager.go:180 msg=\"Adding target\"\n","stream":"stderr","time":"2019-04-30T02:12:41.8443515Z"}

The log format for CRI is as follows:

2019-01-01T01:00:00.000000001Z stderr P some log message

So, as shown above, choose the stage parser that matches your container runtime. (In the CRI format, the third field is a tag: F marks a complete line, while P marks a partial line that the runtime has split.)
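As an aside: if Promtail was installed via the grafana/promtail Helm chart, the parser can be switched in the chart values instead of editing the rendered config. A minimal sketch, assuming the chart’s snippets mechanism (the exact key may vary between chart and Loki-stack versions):

# Hypothetical Helm values snippet; verify against your chart version.
config:
  snippets:
    pipelineStages:
      - cri: {}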

For CRI, cri: {} is in fact “syntactic sugar” for the following stages:

- regex:
    expression: "^(?s)(?P<time>\\S+?) (?P<stream>stdout|stderr) (?P<flags>\\S+?) (?P<content>.*)$"
- labels:
    stream:
- timestamp:
    source: time
    format: RFC3339Nano
- output:
    source: content

📚️ Reference documentation

Grafana series of articles
