Install Kafka using Bitnami Helm

This article was last updated on July 24, 2024.

Deploy the Kafka server on the server-side K3S cluster.

Kafka installation

📚️ Reference:

charts/bitnami/kafka at master · bitnami/charts (github.com)

Enter the following command to add a Helm repository:

> helm repo add tkemarket https://market-tke.tencentcloudcr.com/chartrepo/opensource-stable
"tkemarket" has been added to your repositories
> helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

🔥 Tip:

The tkemarket chart is not kept up to date, so the bitnami repository is recommended.

However, Bitnami is hosted overseas, so there is a risk that it may be unreachable.

Find Helm Chart kafka:

> helm search repo kafka
NAME                       CHART VERSION   APP VERSION   DESCRIPTION
tkemarket/kafka            11.0.0          2.5.0         Apache Kafka is a distributed streaming platform.
bitnami/kafka              15.3.0          3.1.0         Apache Kafka is a distributed streaming platfor...
bitnami/dataplatform-bp1   9.0.8           1.0.1         OCTO Data platform Kafka-Spark-Solr Helm Chart
bitnami/dataplatform-bp2   10.0.8          1.0.1         OCTO Data platform Kafka-Spark-Elasticsearch He...
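If you want to review all of the chart's configurable parameters before installing, Helm can print the chart's default values (a long listing):

helm show values bitnami/kafka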

Install Kafka using the Bitnami Helm chart:

helm install kafka bitnami/kafka \
--namespace kafka --create-namespace \
--set global.storageClass=<storageClass-name> \
--set kubeVersion=<theKubeVersion> \
--set image.tag=3.1.0-debian-10-r22 \
--set replicaCount=3 \
--set service.type=ClusterIP \
--set externalAccess.enabled=true \
--set externalAccess.service.type=LoadBalancer \
--set externalAccess.service.ports.external=9094 \
--set externalAccess.autoDiscovery.enabled=true \
--set serviceAccount.create=true \
--set rbac.create=true \
--set persistence.enabled=true \
--set logPersistence.enabled=true \
--set metrics.kafka.enabled=false \
--set zookeeper.enabled=true \
--set zookeeper.persistence.enabled=true \
--wait

🔥 Tip:

The parameters are described as follows:

  1. --namespace kafka --create-namespace: install into the kafka namespace, creating it if it does not exist;
  2. global.storageClass=<storageClass-name>: use the specified StorageClass;
  3. kubeVersion=<theKubeVersion>: let the bitnami/kafka chart check whether the Kubernetes version requirement is met; if it is not, the release cannot be created;
  4. image.tag=3.1.0-debian-10-r22: the latest image as of 2022-02-19; pinning the full tag minimizes pulling images from the Internet;
  5. replicaCount=3: run 3 Kafka replicas;
  6. service.type=ClusterIP: create the kafka Service for use inside the K8S cluster, so ClusterIP is sufficient;
  7. --set externalAccess.enabled=true --set externalAccess.service.type=LoadBalancer --set externalAccess.service.ports.external=9094 --set externalAccess.autoDiscovery.enabled=true --set serviceAccount.create=true --set rbac.create=true: create one kafka-<0|1|2>-external Service per broker (the replica count above is 3) for access from outside the K8S cluster;
  8. persistence.enabled=true: persist Kafka data; the directory in the container is /bitnami/kafka;
  9. logPersistence.enabled=true: persist Kafka logs; the directory in the container is /opt/bitnami/kafka/logs;
  10. metrics.kafka.enabled=false: do not enable Kafka monitoring (Kafka monitoring is implemented by collecting data through kafka-exporter);
  11. zookeeper.enabled=true: installing Kafka requires Zookeeper to be installed as well;
  12. zookeeper.persistence.enabled=true: enable Zookeeper persistence; the directory in the container is /bitnami/zookeeper;
  13. --wait: the helm command waits for the resources to be created before returning.
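Equivalently, the same options can be collected into a values file and passed with -f instead of a long list of --set flags. A minimal sketch under the same assumptions as above (the file name kafka-values.yaml is arbitrary, and <storageClass-name> / <theKubeVersion> are the same placeholders):

cat > kafka-values.yaml <<'EOF'
global:
  storageClass: <storageClass-name>
kubeVersion: <theKubeVersion>
image:
  tag: 3.1.0-debian-10-r22
replicaCount: 3
service:
  type: ClusterIP
externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
  service:
    type: LoadBalancer
    ports:
      external: 9094
serviceAccount:
  create: true
rbac:
  create: true
persistence:
  enabled: true
logPersistence:
  enabled: true
metrics:
  kafka:
    enabled: false
zookeeper:
  enabled: true
  persistence:
    enabled: true
EOF
helm install kafka bitnami/kafka --namespace kafka --create-namespace -f kafka-values.yaml --wait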

The output is as follows:

creating 1 resource(s)
creating 12 resource(s)
beginning wait for 12 resources with timeout of 5m0s
Service does not have load balancer ingress IP address: kafka/kafka-0-external
...
StatefulSet is not ready: kafka/kafka-zookeeper. 0 out of 1 expected pods are ready
...
StatefulSet is not ready: kafka/kafka. 0 out of 1 expected pods are ready
NAME: kafka
LAST DEPLOYED: Sat Feb 19 05:04:53 2022
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 15.3.0
APP VERSION: 3.1.0
---------------------------------------------------------------------------------------------
WARNING

By specifying "serviceType=LoadBalancer" and not configuring the authentication
you have most likely exposed the Kafka service externally without any
authentication mechanism.

For security reasons, we strongly suggest that you switch to "ClusterIP" or
"NodePort". As alternative, you can also configure the Kafka authentication.

---------------------------------------------------------------------------------------------

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

kafka-0.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.1.0-debian-10-r22 --namespace kafka --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace kafka -- bash

PRODUCER:
    kafka-console-producer.sh \
        --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 \
        --topic test

CONSUMER:
    kafka-console-consumer.sh \
        --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
        --topic test \
        --from-beginning

To connect to your Kafka server from outside the cluster, follow the instructions below:

NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
Watch the status with: 'kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -w'

Kafka Brokers domain: You will have a different external IP for each Kafka broker. You can get the list of external IPs using the command below:

echo "$(kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"

Kafka Brokers port: 9094
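Before running the tests, you can check that all Kafka and Zookeeper pods are Ready and that the external Services have been assigned LoadBalancer IPs (ordinary kubectl checks, not part of the chart output):

kubectl get pods --namespace kafka
kubectl get svc --namespace kafka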

Kafka test validation

Send and receive test messages:

First create a kafka-client pod with the following command:

kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.1.0-debian-10-r22 --namespace kafka --command -- sleep infinity
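Open a shell in the pod (the same command as printed in the chart NOTES above):

kubectl exec --tty -i kafka-client --namespace kafka -- bash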

Then run the following commands inside kafka-client to test:

# Test via the in-cluster broker address
kafka-console-producer.sh --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test
kafka-console-consumer.sh --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test --from-beginning

# Test via the external listener (replace 10.109.205.245 with your own LoadBalancer IP)
kafka-console-producer.sh --broker-list 10.109.205.245:9094 --topic test
kafka-console-consumer.sh --bootstrap-server 10.109.205.245:9094 --topic test --from-beginning
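If you prefer to create the test topic explicitly instead of relying on automatic topic creation, kafka-topics.sh can be run from the same client pod; the partition and replication numbers below are just an example:

kafka-topics.sh --create --topic test --bootstrap-server kafka.kafka.svc.cluster.local:9092 --partitions 3 --replication-factor 3
kafka-topics.sh --describe --topic test --bootstrap-server kafka.kafka.svc.cluster.local:9092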

The result looks like this:

External producer

External consumer

🎉 At this point, the Kafka installation is complete.

Uninstall Kafka

Danger

Command to delete the entire Kafka release (only if needed):

helm delete kafka --namespace kafka
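Note that helm delete does not remove the PersistentVolumeClaims created by the Kafka and Zookeeper StatefulSets. If the persisted data should be removed as well, delete the PVCs explicitly (destructive, double-check the namespace first):

kubectl delete pvc --namespace kafka --all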

Summary

Kafka

  1. Kafka is installed via the Bitnami Helm chart into the kafka namespace of the K8S cluster;

    1. Installation mode: three nodes
    2. Kafka version: 3.1.0
    3. Kafka instances: 3
    4. Zookeeper instances: 1
    5. Kafka data, Zookeeper data, and Kafka logs are persisted under /data/rancher/k3s/storage
    6. SASL and TLS are not configured
  2. Inside the K8S cluster, Kafka can be accessed at:

    kafka.kafka.svc.cluster.local:9092

  3. Outside the K8S cluster, Kafka can be accessed at:

    <loadbalancer-ip>:9094

A screenshot of Kafka’s persistent data is as follows:

[Image: Kafka persistent data under /data/rancher/k3s/storage]
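The same information can also be checked directly on the cluster with ordinary kubectl commands:

kubectl get pvc --namespace kafka
kubectl get pv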