Kafka Broker Pods in a Kubernetes Cluster
In this post, we describe the steps we followed to run Kafka broker pods within a Kubernetes cluster using microk8s on Ubuntu 18.04.
The first step was to enable the firewall in Ubuntu using ufw, if not already done. Let's start by installing microk8s.
$ sudo snap install microk8s --classic
$ sudo usermod -a -G microk8s $USER
$ su - $USER
Configure the firewall to allow pod-to-pod and pod-to-internet communication.
$ sudo ufw allow in on cni0 && sudo ufw allow out on cni0
$ sudo ufw default allow routed
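To confirm the rules took effect, you can optionally inspect the current firewall state:
$ sudo ufw status verbose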
Enable add-ons for microk8s.
$ microk8s.enable dashboard dns
Enable the load-balancer addon. It requires a pool of IP addresses to assign to new LoadBalancer services created within the cluster; we configured a range of 20 IP addresses.
$ microk8s.enable metallb
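Depending on the microk8s version, the metallb addon either prompts for the address range interactively or accepts it inline. A sketch, assuming the range below is free on your network (substitute any 20 unused addresses of your own):
# example range only; pick 20 unused addresses from your own network
$ microk8s.enable metallb:10.64.140.40-10.64.140.59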
Start the cluster and check its status.
$ microk8s.start
$ microk8s.status
If you haven't received any error message, the cluster has started successfully and we can proceed to access the dashboard.
$ microk8s.kubectl get all --all-namespaces
The above command gives us the cluster IP address to access the dashboard. The service name will be service/kubernetes-dashboard, and the port will be 443 unless you change it through configuration.
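If you prefer to query the service directly instead of listing everything, a narrower command works as well; a sketch, assuming the dashboard addon created the service in the kube-system namespace:
$ microk8s.kubectl -n kube-system get service kubernetes-dashboard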
When you access the dashboard in a browser, you are required to log in. We use the token approach, and we obtain the token through the two commands below, executed in sequence.
$ token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
$ microk8s.kubectl -n kube-system describe secret $token
On successful login to dashboard, we can start to create the services.
To create the ZooKeeper service, click the + sign on the dashboard to provide the configuration for the service.
The configuration we provided is given below.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-deployment-1
spec:
  selector:
    matchLabels:
      app: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
        - name: zoo1
          image: bitnami/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-1
Once we click the upload button, unless a format error is reported, we should see the ZooKeeper service running in the dashboard overview screen. Format errors are generally indentation issues and are simple to fix with the guidance provided by the error message.
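The same check can be done from the console; a sketch using the label and service name defined in the configuration above:
$ microk8s.kubectl get pods -l app=zookeeper-1
$ microk8s.kubectl get svc zoo1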
To create the Kafka pods, click the + sign on the dashboard to provide the configuration for the deployment.
The configuration we provided is given below.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-service
  template:
    metadata:
      labels:
        app: kafka-service
    spec:
      containers:
        - name: kafka-service
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_LISTENERS
              value: INTERNAL://:32323,EXTERNAL://:9092
            - name: KAFKA_ADVERTISED_LISTENERS
              value: INTERNAL://kafka-service:32323,EXTERNAL://10.152.183.230:9092
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 10.152.183.182:2181
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
            - name: KAFKA_CREATE_TOPICS
              value: ceyark-test:1:1
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: INTERNAL
Once we click the upload button, unless a format error is reported, we should see the Kafka pods running in the dashboard overview screen. Alternatively, you can view the same information with the command below from a console window.
$ microk8s.kubectl get pods --output=wide
Format errors are generally indentation issues and are simple to fix with the guidance provided by the error message. If you have any other errors, you can check the logs in the dashboard through the Logs option against the pod. The logs are generally good enough to troubleshoot and fix the issues.
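The logs can also be read from the console; for example, using the deployment name from the configuration above:
$ microk8s.kubectl logs deployment/kafka-service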
Finally, we need to configure a LoadBalancer service so that requests originating from outside the cluster are routed to our pods. Execute the following command to create the LoadBalancer service and link it to our deployment.
$ microk8s.kubectl expose deployment kafka-service --type=LoadBalancer --name=my-service
You can view the status of the new LoadBalancer service in the dashboard. Alternatively, enter the commands below to view the details of the service.
$ sudo microk8s.kubectl get services
$ sudo microk8s.kubectl describe svc my-service
Name: my-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kafka-service
Type: LoadBalancer
IP: 10.182.183.167
LoadBalancer Ingress: 10.182.183.230
Port: <unset> 9092/TCP
TargetPort: 9092/TCP
NodePort: <unset> 31985/TCP
Endpoints: 10.1.2.122:9092
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type    Reason        Age  From                Message
----    ------        ---  ----                -------
Normal  IPAllocated   5s   metallb-controller  Assigned IP "10.182.183.230"
Normal  nodeAssigned  5s   metallb-speaker     announcing from node "ceyark-macserv1"
If the LoadBalancer service configuration looks correct as shown above and the dashboard does not report any failures across the three items we started, we are ready to use the Kafka brokers deployed in our pods. Notice the Endpoints value above: it is the IP address assigned to the pod, and it changes every time the pod restarts. Applications that expect a near-static address will not tolerate this, and in such scenarios the LoadBalancer service gives us a fixed IP to rely on.
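To see the difference, you can compare the pod endpoint with the fixed load-balancer IP from the console before and after a pod restart, for example:
$ microk8s.kubectl get endpoints my-service
$ microk8s.kubectl get svc my-service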
You may want to test the setup before you start to use it. We used a test utility called kafkacat; install it as shown below.
$ sudo apt-get install kafkacat
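Once installed, an optional sanity check (assuming your kafkacat build supports the -L metadata flag) is to list the broker metadata and confirm the ceyark-test topic is visible:
$ kafkacat -b 10.182.183.230:9092 -L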
We then start the consumer first, in a separate console.
$ kafkacat -b 10.182.183.230:9092 -t ceyark-test
Then we start the producer in a separate console.
$ echo hello | kafkacat -b 10.182.183.230 -t ceyark-test
If everything went well, we should see the "hello" message in the consumer console. Once you are done, issue the following command to stop the cluster.
$ microk8s.stop
With the configuration files ready at hand, we were able to create the Kafka Docker cluster in less than 10 minutes, and this setup helps with our development tasks.