
Cassandra Pods in a Kubernetes Cluster

In this post, we describe the steps we followed to run Cassandra pods within a Kubernetes cluster using MicroK8s on Ubuntu 18.04.

The first step is to enable the firewall in Ubuntu using ufw, if that has not been done already. Let's start by installing MicroK8s.

       $ sudo snap install microk8s --classic
       $ sudo usermod -a -G microk8s $USER
       $ su - $USER

Now we configure the firewall to allow pod-to-pod and pod-to-internet communication:

       $ sudo ufw allow in on cni0 && sudo ufw allow out on cni0
       $ sudo ufw default allow routed
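
To confirm the rules took effect, ufw can report its active configuration (a quick sanity check, not required for the setup to work):

       $ sudo ufw status verbose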

Let's enable the add-ons for MicroK8s:

       $ microk8s.enable dashboard dns

Now start the cluster.

       $ microk8s.start
       $ microk8s.status

If no error message is observed, the cluster has started successfully and we can proceed to access the dashboard.
The command below gives us the cluster IP address of the dashboard service.

 
      $ sudo microk8s.kubectl get services -n kube-system

Typically the dashboard service is named service/kubernetes-dashboard and listens on port 443.

The dashboard requires us to log in using a token. To get the token, use the two commands below.

       $ token=$(microk8s.kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
       $ microk8s.kubectl -n kube-system describe secret $token
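
With the token in hand, one way to reach the dashboard from a browser on the host is a port-forward; the local port 10443 below is our own, arbitrary choice:

       $ microk8s.kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

The dashboard is then available at https://localhost:10443.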

On successful login to the dashboard, we can start creating the resources.

First, we define a StorageClass. If you already have a StorageClass, you can use that as well. Our StorageClass definition is given below.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceyarkvolume1
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
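
One note: the microk8s.io/hostpath provisioner is supplied by the MicroK8s storage add-on. If provisioning fails later on, the add-on most likely needs to be enabled first (add-on name as in our MicroK8s version):

       $ microk8s.enable storage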

We created the StorageClass through the command below.

          $ microk8s.kubectl apply -f ceyarkvolume1-sc.yml

If the above command executes without any error, you can check the created StorageClass through

         $ sudo microk8s.kubectl get sc

The second step is to create a PersistentVolumeClaim. Our claim definition is given below.


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceyark-volclaim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceyarkvolume1
  resources:
    requests:
      storage: 500M

The volume claim is created through the command below.

       $ microk8s.kubectl apply -f persistantvolume-sc.yml

If the above command executes without any error, you can check the created persistent volume through

       $ sudo microk8s.kubectl get persistentvolumes
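
You can also confirm that the claim itself is bound; the claim name below matches our definition above:

       $ microk8s.kubectl get pvc ceyark-volclaim1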

The third step is to create a Cassandra StatefulSet. Our definition is given below.


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      volumes:
        - name: cassandra-data
          persistentVolumeClaim:
            claimName: ceyark-volclaim1
      containers:
        - name: cassandra
          image: cassandra:3
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          env:
            - name: CASSANDRA_SEEDS
              value: cassandra-0.cassandra.default.svc.cluster.local
            - name: MAX_HEAP_SIZE
              value: 256M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_CLUSTER_NAME
              value: "Cassandra"
            - name: CASSANDRA_DC
              value: "DC1"
            - name: CASSANDRA_RACK
              value: "Rack1"
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: GossipingPropertyFileSnitch
          volumeMounts:
            - name: cassandra-data
              mountPath: /var/lib/cassandra/data
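
Note that the StatefulSet sets serviceName: cassandra, and the seed address cassandra-0.cassandra.default.svc.cluster.local resolves only if a matching headless Service named cassandra exists in the default namespace. A minimal sketch of such a Service is given below; apply it before the StatefulSet. It also gives the delete command at the end of this post something to match with -l app=cassandra.

apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
      name: cql
  selector:
    app: cassandra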

The StatefulSet is created through the command below.

     $ microk8s.kubectl apply -f cassandrastatefulset.yml

You can check the status of the Cassandra pods through the commands shown below.

      $ microk8s.kubectl get pods --output=wide
      $ microk8s.kubectl exec -ti cassandra-0 -- nodetool status

Before you start to use the database, you may want to test it. Execute the command below to check whether you are able to access the DB.

      $ microk8s.kubectl exec -ti cassandra-0 -- cqlsh

On successful login:
cqlsh> describe tables
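
For a quick end-to-end check, you can create a throwaway keyspace and table; the names below are our own and purely illustrative:

cqlsh> CREATE KEYSPACE IF NOT EXISTS ceyark_test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> CREATE TABLE IF NOT EXISTS ceyark_test.ping (id int PRIMARY KEY, note text);
cqlsh> INSERT INTO ceyark_test.ping (id, note) VALUES (1, 'hello');
cqlsh> SELECT * FROM ceyark_test.ping;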

If you want to delete the pods and the claims, you can execute the commands below in sequence.

     $ microk8s.kubectl delete service -l app=cassandra
     $ microk8s.kubectl delete -f persistantvolume-sc.yml
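
The two commands above remove the Service and the claim. The StatefulSet itself can be deleted through its manifest as well; deleting it first lets the claim release cleanly (file name as used earlier):

     $ microk8s.kubectl delete -f cassandrastatefulset.yml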

To ensure the storage is reclaimed, we can execute the command below to check.

    $ sudo microk8s.kubectl get persistentvolumes

With the configuration files ready, we were able to create the Cassandra pods within the cluster very quickly. We tried different options, including scaling the number of pods and taking backups of the data in the database; an example of scaling is shown below. The possibilities are exciting.
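
As an illustration, scaling the StatefulSet is a one-line command; the replica count below is arbitrary. Note that because our StatefulSet mounts a single shared PersistentVolumeClaim rather than using volumeClaimTemplates, additional replicas would contend for the same volume, so a real multi-node setup would switch to per-pod claims:

     $ microk8s.kubectl scale statefulset cassandra --replicas=3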