K8s Pod Sidecar Application Scenario - Adding an NGINX Sidecar as a Reverse Proxy and Web Server

This article was last updated on February 7, 2024.

Introduction to Kubernetes pod sidecars

Sidecar

A sidecar is a separate container that runs alongside the application container in a Kubernetes pod and acts as a helper for the main application.

Some common auxiliary functions provided by sidecars are:

  1. Service mesh proxy
  2. Monitoring exporters (e.g. redis exporter)
  3. ConfigMap or/and Secret Reloader (e.g. Prometheus’ Config Reloader)
  4. Auth Proxy (e.g. OAuth Proxies, etc.)
  5. Layer 7 reverse proxy and web server
  6. Log consolidation (audit logs are sent separately to a log channel…)
  7. Demo or AllInOne apps (example apps like nextcloud or Jaeger AllInOne)

In a service mesh, the sidecar offloads all the functions the mesh requires - SSL/mTLS, traffic routing, high availability, and so on - from the application itself, and implements advanced release patterns such as circuit breaking, canary, and blue-green deployments.

As data plane components, sidecars in a service mesh are typically managed by some kind of control plane. While the sidecars route application traffic and provide other data plane services, the control plane injects them into pods as needed and performs management tasks, such as updating mTLS certificates and pushing them to the appropriate sidecars.

In the log consolidation scenario, the sidecar aggregates and formats the log output of multiple application instances into a single file.

Let's get down to business: using NGINX (or Caddy, etc.) as a sidecar, mainly as a reverse proxy and web server.

Web Server Sidecar

Scenario assumptions

Suppose there is such a scenario:

I'm running the native Prometheus AlertManager, and I already have an Ingress in front of it.
I want to do 2 things now:

  1. Improve the concurrency of the AlertManager UI (add buffering and caching, enable gzip, etc.)
  2. Make a small change to one of AlertManager's JS files (say script.js), but instead of intrusively modifying the native AlertManager binary, put the modified JS into nginx's www directory and have nginx serve it from a separate location.

In this scenario, the Ingress alone obviously cannot satisfy both requirements. This is where an NGINX sidecar can be added to the AlertManager pod.

The details are as follows

Typical steps for NGINX Sidecar use

  1. Create a ConfigMap for the NGINX conf (listening on 8080, reverse proxying to the backend on 9093);
  2. Create a ConfigMap for AlertManager's script.js;
  3. Modify the original AlertManager StatefulSet to add:
    1. the NGINX sidecar
    2. 3 volumes: 2 to mount the ConfigMaps above, and one EmptyDir for the nginx cache
  4. Change the AlertManager Service port from 9093 to 8080 and rename it from http to nginx-http;
  5. Optionally, modify other parts, such as the Ingress, to adjust the port (a sketch of the overall apply sequence follows below).
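
A rough apply sequence might look like the following, assuming the manifests below are saved under the file names shown (the file names are assumptions for illustration):

kubectl apply -f alertmanager-nginx-proxy-config.yaml
kubectl apply -f alertmanager-script-js.yaml
kubectl apply -f monitor-alertmanager-statefulset.yaml
kubectl apply -f monitor-alertmanager-service.yaml
# Wait for the pods to roll over with the new sidecar
kubectl rollout status statefulset/monitor-alertmanager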

NGINX Conf’s ConfigMap

The details are as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-nginx-proxy-config
  labels:
    app.kubernetes.io/name: alertmanager
data:
  nginx.conf: |-
    worker_processes auto;
    error_log /dev/stdout warn;
    pid /var/cache/nginx/nginx.pid;

    events {
      worker_connections 1024;
    }

    http {
      include /etc/nginx/mime.types;
      log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';

      proxy_connect_timeout 10;
      proxy_read_timeout 180;
      proxy_send_timeout 5;
      proxy_buffering off;
      proxy_cache_path /var/cache/nginx/cache levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;

      server {
        listen 8080;
        access_log off;

        gzip on;
        gzip_min_length 1k;
        gzip_comp_level 2;
        gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
        gzip_vary on;
        gzip_disable "MSIE [1-6]\.";

        proxy_set_header Host $host;

        location = /script.js {
          root /usr/share/nginx/html;
          expires 90d;
        }

        location / {
          proxy_cache my_zone;
          proxy_cache_valid 200 302 1d;
          proxy_cache_valid 301 30d;
          proxy_cache_valid any 5m;
          proxy_cache_bypass $http_cache_control;
          add_header X-Proxy-Cache $upstream_cache_status;
          add_header Cache-Control "public";

          proxy_pass http://localhost:9093/;

          if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
            expires 90d;
          }
        }
      }
    }
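
Once the sidecar is added (see the StatefulSet change below), the configuration syntax can be sanity-checked from inside the sidecar container; a minimal sketch, assuming the pod is named monitor-alertmanager-0:

kubectl exec -it monitor-alertmanager-0 -c alertmanager-proxy -- nginx -t -c /nginx/nginx.conf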

AlertManager script.js ConfigMap

Details omitted.

First, download script.js through the browser, then modify it as needed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-script-js
  labels:
    app.kubernetes.io/name: alertmanager
data:
  script.js: >-
    ...
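
Rather than writing this ConfigMap by hand, it can also be generated from the modified file; a minimal sketch, where the download URL is an assumption that depends on your AlertManager deployment:

# Fetch the original asset (adjust the URL to your AlertManager), edit it locally,
# then generate the ConfigMap manifest from the file
curl -o script.js http://<alertmanager-host>:9093/script.js
kubectl create configmap alertmanager-script-js \
  --from-file=script.js \
  --dry-run=client -o yaml > alertmanager-script-js.yaml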

Modify StatefulSets

Some of the changes are as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: monitor-alertmanager
spec:
  template:
    spec:
      volumes:
        # Add 3 volumes
        - name: nginx-home
          emptyDir: {}
        - name: html
          configMap:
            name: alertmanager-script-js
            items:
              - key: script.js
                mode: 438
                path: script.js
        - name: alertmanager-nginx
          configMap:
            name: alertmanager-nginx-proxy-config
            items:
              - key: nginx.conf
                mode: 438
                path: nginx.conf
      containers:
        # Add the NGINX sidecar
        - name: alertmanager-proxy
          args:
            - nginx
            - -g
            - daemon off;
            - -c
            - /nginx/nginx.conf
          image: "nginx:stable"
          ports:
            - containerPort: 8080
              name: nginx-http
              protocol: TCP
          volumeMounts:
            - mountPath: /nginx
              name: alertmanager-nginx
            - mountPath: /var/cache/nginx
              name: nginx-home
            - mountPath: /usr/share/nginx/html
              name: html
          securityContext:
            runAsUser: 101
            runAsGroup: 101
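
Note that runAsUser/runAsGroup 101 correspond to the nginx user in the official image, which is also why the pid file and cache are redirected to the writable /var/cache/nginx EmptyDir. After applying the StatefulSet change, it is worth confirming that the pod now runs both containers; a quick check (the pod name is an assumption for illustration):

kubectl get pod monitor-alertmanager-0 -o jsonpath='{.spec.containers[*].name}'
# Expected output: the original container plus the sidecar, e.g.
# alertmanager alertmanager-proxy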

Modify the service port

As follows:

apiVersion: v1
kind: Service
metadata:
  name: monitor-alertmanager
  labels:
    app.kubernetes.io/name: alertmanager
spec:
  ports:
    - name: nginx-http
      protocol: TCP
      # Modify the following 2 items
      port: 8080
      targetPort: nginx-http
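
To confirm that traffic now flows through the sidecar, one option is to port-forward the Service and inspect the headers NGINX adds; a minimal sketch (run against the namespace where AlertManager lives):

kubectl port-forward svc/monitor-alertmanager 8080:8080 &
# X-Proxy-Cache should show MISS/HIT/BYPASS; text/html responses should be gzipped
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://localhost:8080/ \
  | grep -iE 'x-proxy-cache|content-encoding'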

The final effect

Taking AlertManager as an example, before the change:

AlertManager UI - matcher

After the change (the matcher examples better match the actual scenario, and several examples were added; a minor change indeed):

AlertManager UI - matcher, after the change

summary

Kubernetes pods were designed to hold multiple containers, which leaves plenty of room for imagination in how pod sidecars can be used in Kubernetes.

Sidecars are generally used for auxiliary functions, such as:

  1. Service mesh proxy
  2. Monitoring exporters (e.g. redis exporter)
  3. ConfigMap or/and Secret Reloader (e.g. Prometheus’ Config Reloader)
  4. Auth Proxy (e.g. OAuth Proxies, etc.)
  5. Layer 7 reverse proxy and web server
  6. Log consolidation (audit logs are sent separately to a log channel…)
  7. Demo or AllInOne apps (example apps like nextcloud or Jaeger AllInOne)

This article demonstrated the usefulness of sidecars by adding NGINX as a sidecar that acts as a Layer 7 reverse proxy and web server.

🎉🎉🎉
