Working with Dynamic Jenkins Clusters — Part IV


Jenkins is a continuous integration tool written in Java. It can work with multiple architectures, depending on the use case. When Jenkins is installed, it typically runs in a single-server architecture, where all the required job builds occur on one system. However, such an architecture is not useful when we need multiple environments to test job builds, or when the project is so large that the current system cannot provide the resources required to automate software deployment with Jenkins. This brings up the requirement for a distributed architecture.

When we install Jenkins, the computer system (node) it is installed on is automatically configured as the master node. However, the user can also configure worker nodes, where all the operations and job builds occur, while the master node serves as a central point of management. Such an arrangement of master and worker nodes is also called a Jenkins cluster. In this arrangement, the master and worker nodes communicate over TCP/IP.

Figure 1: Example of Distributed Architecture (From Edureka)

Based on the permanence of the worker nodes, we can classify Jenkins clusters into two types:

  1. Static Clusters: Static clusters consist of a master node and multiple worker nodes that are permanently configured and ready to use at any point in time. Static clusters can waste resources, since there is no guarantee that all the permanently running nodes (and their build executors) will be kept busy by the current workload.
  2. Dynamic Clusters: Dynamic clusters consist of a master node (where Jenkins is installed) and worker nodes configured through cloud provisioning. In this setup, the worker nodes are provisioned ad hoc as Docker agents: containers launched as and when the requirement arises. To use dynamic clusters, we need the Docker and Yet Another Docker plugins in Jenkins.

To get a glimpse of how dynamic clusters work in Jenkins, we build a small project with the objectives given below.

  1. Create a dynamic Jenkins cluster, whose Docker agent image must be configured with Linux and kubectl (to work with Kubernetes). The cluster should immediately launch a dynamic worker node if a job build begins.
  2. Create a job chain in Jenkins. Pull code from a GitHub repository automatically when code is pushed to the repository. Create a new container image dynamically for the application and copy the application’s code into the image. Push the image to Docker Hub. Then, launch the application on top of a Kubernetes cluster.
  3. If the application is launched for the first time, create a Kubernetes deployment and expose it. Otherwise, perform a rollout of the existing application to implement zero downtime for the user.

We need a few prerequisites to begin our work:

  1. Minikube and kubectl installed on the base system (Windows or others).
  2. A RHEL8 VM with Docker, Jenkins, and kubectl installed.

Setting up the dynamic cluster

Before setting up the dynamic cluster, we need to install the Docker and Yet Another Docker plugins in Jenkins. In a dynamic cluster, the Jenkins server communicates with the Docker service on a given system. For this, we need to allow TCP communication between Jenkins and the Docker service. This is done by modifying the docker.service file at the highlighted line, as shown below. The modification allows TCP communication from any IP address at the specified port number (4545). We can assign any port number here.

Figure 2: Modifying the docker.service file
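
For reference, the change amounts to adding TCP as a listening endpoint on the dockerd command line. A sketch of the relevant line (port 4545 as above; the Unix socket is kept so local Docker clients keep working):

#Relevant line of /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4545 -H unix:///var/run/docker.sock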

Once the file is modified, we reload the Docker daemon to apply the changes:

# systemctl daemon-reload
# systemctl restart docker.service

Next, we create a Dockerfile to build a container image based on CentOS with support for Kubernetes. We will use this image as the base template for the dynamic cluster's Docker agents. Note that, to serve as a template for the dynamic cluster, the image must have Java and an SSH server installed in it.

#Contents of the Dockerfile for the Kubernetes agent image
FROM centos

#Java and an SSH server are mandatory for a Docker agent template
RUN yum install sudo java openssh-server git -y
RUN ssh-keygen -A

#Install the latest stable kubectl release
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl

#Copy the Minikube certificates and the kubectl config file
#(the destination matches the paths referenced in the config file below)
RUN mkdir /root/.kube /root/kubectl_certs
COPY ca.crt client.crt client.key /root/kubectl_certs/
COPY config /root/.kube/
EXPOSE 22
EXPOSE 8080
CMD ["/usr/sbin/sshd", "-D"]

In this Dockerfile, certain keys (.key) and certificates (.crt) are copied into the image. These files are created automatically when we install Minikube and kubectl on the base system; to link kubectl to the Minikube cluster, kubectl must be set up with these keys and certificates. The config file being copied is what the kubectl client uses to communicate with Minikube. On Windows, it is created in the .kube folder of the user's home directory and can simply be copied from there. Its contents are shown below.
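
The exact location of these files varies with the Minikube version, but on recent releases they can typically be copied from the .minikube folder of the base system into the Dockerfile's build context, roughly as follows (the VM address and destination path are placeholders):

#Run on the base system (on Windows, .minikube lives under %USERPROFILE%)
scp ~/.minikube/ca.crt \
    ~/.minikube/profiles/minikube/client.crt \
    ~/.minikube/profiles/minikube/client.key \
    root@<rhel8-vm-ip>:/root/kube-agent/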

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://<minikube ip>:8443
    certificate-authority: /root/kubectl_certs/ca.crt
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
users:
- user:
    client-key: /root/kubectl_certs/client.key
    client-certificate: /root/kubectl_certs/client.crt
  name: minikube

Next, we write the Dockerfile for the custom application's container image. In our case, a simple web application written in HTML will do.

#Contents of the Dockerfile for the HTML application
FROM centos

RUN yum install sudo vim httpd php git -y
COPY index.html /var/www/html/
EXPOSE 80
CMD /usr/sbin/httpd -DFOREGROUND
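
Before wiring this image into the job chain, it can be sanity-checked by hand on the RHEL8 VM; a quick test, with <dockerhub-user> as a placeholder account name:

# docker build -t <dockerhub-user>/html-app:v1 .
# docker run -d -p 8081:80 <dockerhub-user>/html-app:v1
# curl http://localhost:8081/index.html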

Now, we move on to Jenkins to configure the dynamic cloud. To do this, we go to the Manage Nodes and Clouds section of Manage Jenkins and then to Configure Clouds. If the Docker plugins have been installed successfully, we can add a new cloud that communicates with the Docker service on any given host.

The name of the cluster is user-defined.

Figure 3: Setting up a dynamic cluster (Part 1)
Figure 4: Setting up a dynamic cluster (Part 2)
Figure 5: Setting up a dynamic cluster (Part 3)

Test the connection to the cloud; once it succeeds, save the settings. The main part of the setup is now finished, and we only need to work on the Jenkins jobs themselves.

Jenkins Jobs

In Job1, we build the container image for the Kubernetes agent and push it to Docker Hub.

Figure 6: Job 1 configuration
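
The shell build step of Job1 boils down to something like the following sketch (the image and account names are placeholders; the agent Dockerfile above is assumed to be in the job's working directory):

#Job1 build step (sketch): build the Kubernetes agent image and push it
docker build -t <dockerhub-user>/kubectl-agent:v1 .
docker login -u <dockerhub-user> -p <password>
docker push <dockerhub-user>/kubectl-agent:v1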

In Job2, we download the Dockerfile for the HTML application, along with the application code, from a GitHub repository, and then build the image dynamically. We restrict this job to run only on the master node, where the GitHub files are downloaded.

We set two triggers for the job build: a successful build of Job1, or any push to the GitHub repository. Downloading repository content requires webhooks and the GitHub plugin; the procedure has been explained in detail here.

Figure 7: Job2 configuration (Part 1)
Figure 8: Job2 configuration (Part 2)
Figure 9: Job2 configuration (Part 3)
Figure 10: Job2 configuration (Part 4)
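
Conceptually, the shell portion of Job2 looks roughly like this (a sketch; the workspace is assumed to already contain the repository contents pulled by the Git plugin):

#Job2 build step (sketch): build the application image from the pulled code
cd $WORKSPACE
docker build -t <dockerhub-user>/html-app:v1 .
docker push <dockerhub-user>/html-app:v1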

In Job3, we handle the deployment of the application using the custom Docker image created above. Here we utilize the dynamic cluster by restricting the job build to the cluster's Docker agent (labeled docker). We also make Job3 a downstream project of Job2.

Figure 11: Job3 configuration (Part 1)
Figure 12: Job3 configuration (Part 2)
Figure 13: Job3 configuration (Part 3)
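
The heart of Job3 is a script implementing the create-or-rollout logic from objective 3. A minimal sketch, assuming the deployment is named html-app (a placeholder):

#Job3 build step (sketch): runs on the dynamic Docker agent, which has kubectl configured
if kubectl get deployment html-app > /dev/null 2>&1
then
    #Deployment exists: update the image to perform a zero-downtime rollout
    #(pushing a new tag per build makes the rollout take effect)
    kubectl set image deployment/html-app html-app=<dockerhub-user>/html-app:v1
    kubectl rollout status deployment/html-app
else
    #First launch: create the deployment and expose it on a NodePort
    kubectl create deployment html-app --image=<dockerhub-user>/html-app:v1
    kubectl expose deployment html-app --type=NodePort --port=80
fi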

We need to observe the console output of Job3, as the exposed port of the deployment is printed there.
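
The port can also be queried from the cluster directly, for example (html-app is the placeholder service name from the sketch above):

# kubectl get service html-app
# minikube service html-app --url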

Figure 14: Noting the exposed port of the deployment

Finally, our output is as shown below.

Figure 15: Website Output

This article is written as a part of the DevOps Assembly Lines Training Program conducted by Mr. Vimal Daga from LinuxWorld Informatics Pvt. Ltd.

You can find some of my previous works for the DevOps Assembly Lines Program below.

  1. Working with Jenkins — An Introduction: https://medium.com/@akshayavb99/working-with-jenkins-an-introduction-48ecf3de3c25
  2. Working with Jenkins, Docker, Git, and GitHub — Part II: https://medium.com/@akshayavb99/working-with-jenkins-docker-git-and-github-part-ii-d74b6e47140c
  3. Working with Jenkins, Docker, Git, and GitHub — Part III: https://medium.com/@akshayavb99/working-with-jenkins-docker-github-and-kubernetes-part-iii-72deae79bf2e
