Automation with Ansible — Common Terms and Setting up Docker

This is the second article in the Automation with Ansible series. For the first article, please refer to this link.

In this series, we will be looking at different ways in which Ansible can be used to implement automation in the IT industry


How do we begin working with Ansible?

Ansible enables us to configure and manage software installation, updates, and maintenance on multiple systems (target nodes) simultaneously from a single computer called the controller node. We can work with Ansible either from the command line (using ad-hoc commands) or by using Ansible Playbooks. On the command line, we use the ansible command with a variety of in-built packages called modules to perform different tasks.

Ansible allows us to use a single framework to interact with different Operating Systems. To understand this better, let’s take the scenario where we try to install a webserver on an OS. Every OS has its own installation procedure. For example, in RHEL8 we can configure the YUM repository and then use the yum command to install the Apache webserver. In Ubuntu, we use the apt-get command for the same purpose. Ansible provides a layer of abstraction so that we don’t need to remember the individual commands for installing the webserver on each OS. Instead, we use the in-built modules to declare the tasks we need to perform, and Ansible itself identifies which OS a target node is running and performs the necessary steps for the installation.

Now, let’s look at some common terms related to Ansible.

Common Terms in Ansible

Control node/Controller node

This is the node where we will install Ansible and run commands (or playbooks) for configuration management on the target nodes. The only pre-requisite for setting up a controller node is to install Python on it, since Ansible is built on top of Python. We can set up multiple controller nodes based on our requirements. Windows is the exception, as it cannot be used as a control node. You can read more about it here.

Managed nodes/Target node

Managed nodes are the devices we manage with the Ansible controller node. They are also called hosts. We do not need to set up any agent or install other components to configure a system as a managed node. To identify the managed node, the Ansible controller node simply needs the IP address.

Inventory

The inventory is a file where the details of all the managed nodes are given. It is stored on the controller node, and the inventory file’s absolute path should be mentioned in the Ansible configuration file (ansible.cfg) so that Ansible can recognize the managed nodes and run commands on them.

In the inventory, we can define target nodes as individual nodes or as groups of nodes. Defining a group allows us to run the same configuration on multiple nodes simply by referring to them collectively as the group.

Modules

Ansible provides us with many in-built packages called modules. Using these modules, we can perform different functions like setting firewall rules, installing software/packages, creating files and directories, downloading files from URLs, administering databases, etc. Modules can be used as options in ad-hoc commands, or to declare what a task in a playbook will perform.

Tasks

Tasks allow us to perform different operations using Ansible modules. Each ad-hoc command can be considered a single task.

Playbooks

Ansible allows us to perform tasks, one at a time, using the ad-hoc commands. But we often need to perform many tasks in a particular sequence to set up the desired configuration in the target node(s). For such cases, we use Ansible Playbooks.

Using Playbooks, we can define multiple tasks in the desired order, decide the execution of some tasks based on the output of the execution of other tasks, and even decide which tasks will be performed by which group of hosts. Most importantly, Playbooks allow us to create a script that can be replicated in other systems and locations if needed, without listing out all the tasks in proper order repeatedly using ad-hoc commands.

It’s always easier to understand through practice, so we will be writing an Ansible Playbook for setting up Docker. The remaining part of the article is written assuming you know the basics of Docker and containers.

Setting up Docker using Ansible

Docker is one of the tools that implement containerization. Using Docker, we can launch containers running different OS environments from publicly available or customized container images. We can set up Docker using the command line, but here we will use an Ansible Playbook to set up Docker and launch a container to test the tool. Before we begin with the Ansible Playbook, let’s list out the pre-requisites for our setup:

  • Install Ansible on the control node. (Refer here for details on installation).
  • Install sshpass on the control node. Typically, when we execute commands remotely through SSH, we are prompted for a password for every command. sshpass allows us to avoid this.

Please note that all the steps I am explaining below have been tested on RHEL8 VMs running on Oracle VirtualBox with Windows 10 as the base OS. I am also using one control node and one target node for this work, although you can add multiple target nodes.

After we have the pre-requisites, we modify Ansible’s configuration file. The file is named ansible.cfg and is present in the /etc/ansible directory. In the configuration file, we need to give the absolute path of the inventory file. In my case, I have named the file hosts.txt and stored it in the same directory as the configuration file. The host_key_checking setting is important to note, and you can read more about it here.

# Contents of ansible.cfg
[defaults]
inventory= /etc/ansible/hosts.txt
host_key_checking=false

Next, we fill in the details about our hosts (target nodes) in the inventory file hosts.txt. For now, I am using only one target node, so there is no need for groups. The entry for every target node contains the following information:

  • IP address
  • The username of the account with which we will be remotely executing commands. This is given as the value of the ansible_user property.
  • The password for that user account. We can enter this directly for now, but there are more secure ways to store passwords. This is given as the value of the ansible_password property.
  • The protocol used by the target node and the control node to communicate. This is given as the value of the ansible_connection property.
# Contents of inventory
[group_name]
<IP-addr of target nodes> ansible_user=<username> ansible_password=<password> ansible_connection=ssh

Once the inventory file has been configured, we can check our connectivity to all the target nodes from the control node using an ad-hoc command with the ping module.

ansible all -m ping

This command reports whether connectivity between the control node and each target node succeeded or failed. An example of the output is shown below.
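For illustration, with a target node at a hypothetical address 192.168.1.10, a successful response typically looks something like this (the exact fields vary with the Ansible version):

```
192.168.1.10 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```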

Let’s list out the steps for installing Docker on the RHEL8 target node:

  • Configure the YUM repository
  • Download the Docker-CE version and install it
  • Launch a container with httpd web server setup
  • Copy a webpage into the default Document Root of the web server in the container

Ansible playbooks are written in the YAML format. You can check out more about the YAML syntax from Ansible’s official docs. Each playbook can contain one or more tasks. Tasks are simply the declaration of the changes we want to make in the desired target nodes, and to make these changes we can use the in-built modules of Ansible.

At the beginning of every playbook, we declare the hosts on which the tasks will be run. After declaring the hosts, we can start listing out our tasks. All the tasks are listed below as a whole. Each task can be given a name for easier code updates and maintenance later on.
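A minimal playbook skeleton, assuming our inventory group is named group_name as in the inventory snippet above, looks roughly like this:

```yaml
# docker_playbook.yml (skeleton)
# "group_name" must match the group defined in the inventory file
- hosts: group_name
  tasks:
    - name: Check connectivity
      ping:
```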

Now, let’s look at the tasks!

Creating the directory and mounting the ISO file

I am using an RHEL8 VM running on Oracle VirtualBox as the target node. Every VM needs an ISO file to install the OS, and the ISO file often contains many packages for software that we can install later. In RHEL, YUM is essentially a package manager which makes installation and uninstallation of packages very easy. But YUM needs access to the directories where the packages are stored. Hence, we first mount the ISO file onto a local directory so that we can use the directory to refer to the location of the packages.

To create a directory, we use Ansible’s file module. To mount the ISO file, we use the mount module. Each module has its own set of arguments that we need to provide based on the task we want to perform.
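A sketch of these two tasks is shown below. The mount point /dvd and the device /dev/cdrom are assumptions for a VirtualBox VM with the RHEL8 ISO attached; adjust them for your setup.

```yaml
# Assumed paths: ISO attached to the VM as /dev/cdrom, mounted at /dvd
- name: Create a directory for the mount point
  file:
    path: /dvd
    state: directory

- name: Mount the RHEL8 ISO onto the directory
  mount:
    src: /dev/cdrom
    path: /dvd
    fstype: iso9660
    state: mounted
```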

Creating a directory to save copies of files from the control node

Ansible Playbooks work in an interesting way. Although we write and run the Playbook on the control node, the tasks themselves are executed on the target node(s); only the outputs of those tasks are returned to the control node and displayed on its command line.

Now that we understand this, it follows that to copy files into a Docker container running on the target node, the file must first exist on the target node: we either create a new file there (using the file module) or copy an existing file from the control node to the target node using the copy module.
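The two tasks below sketch this step. The file name web.html and the directory /webfiles are hypothetical names chosen for illustration:

```yaml
# Hypothetical names: web.html is the page on the control node,
# /webfiles is the directory on the target node that will hold it
- name: Create a directory on the target node for copied files
  file:
    path: /webfiles
    state: directory

- name: Copy the webpage from the control node to the target node
  copy:
    src: web.html
    dest: /webfiles/web.html
```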

Configuring YUM repo using packages from AppStream and BaseOS of ISO

In RHEL8 ISO, the packages are present in the AppStream and the BaseOS directories. While configuring the YUM repo, we can use the paths to these directories through the mounted directory of the ISO file.

Note that this isn’t the only way to configure the RHEL8 packages. There are other sources from which we can configure the RHEL8 packages.

To configure the YUM repos, we can use the yum_repository module. Since there are two different locations in which packages can be found, we configure two repos through individual tasks.
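A sketch of the two repo tasks, assuming the ISO was mounted at /dvd (the repo names dvd_appstream and dvd_baseos are arbitrary labels):

```yaml
# Assumes the RHEL8 ISO is mounted at /dvd
- name: Configure YUM repo for AppStream
  yum_repository:
    name: dvd_appstream
    description: RHEL8 AppStream packages from the ISO
    baseurl: file:///dvd/AppStream
    gpgcheck: no

- name: Configure YUM repo for BaseOS
  yum_repository:
    name: dvd_baseos
    description: RHEL8 BaseOS packages from the ISO
    baseurl: file:///dvd/BaseOS
    gpgcheck: no
```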

Configuring YUM repo source for Docker

Docker is available in two editions: Community Edition (CE) and Enterprise Edition (EE). Docker-CE cannot be installed directly from the default packages available in the RHEL8 ISO. Instead, we can use Docker’s repository for CentOS, since CentOS is closely related to RHEL8.

After configuring the YUM repo sources, we need to run the yum repolist command. I did this manually for now on the target node, but we can ask Ansible to do this using the command module.
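These two tasks can be sketched as follows. The baseurl points at Docker’s CentOS package repository, which is commonly used on RHEL8; verify the URL against Docker’s documentation for your setup.

```yaml
- name: Configure YUM repo for Docker-CE (CentOS packages)
  yum_repository:
    name: docker-ce
    description: Docker CE repository
    baseurl: https://download.docker.com/linux/centos/7/x86_64/stable
    gpgcheck: no

- name: Refresh the repo list
  command: yum repolist
```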

Installing Docker and docker-py package

Installation of any software can be done using the command line. To execute commands on the command line, we can use the command module.

Notice that we are using YUM to install Docker-CE, which means Ansible’s yum module can also be used to install it. But in our case, the yum module won’t be useful. Why?

The yum module does not have an argument for passing special options during installation. When installing Docker-CE on RHEL8, we need the --nobest option so that a compatible (rather than the very latest) version is installed. Since the yum module cannot pass this option, we simply run the original installation command through the command module.

Since we need to launch containers using Ansible, we need Ansible to work with the Docker daemon. Since Ansible is built on Python, we need to install a Python package named docker-py.
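The two installation tasks can be sketched like this. Using pip3 directly is an assumption; Ansible’s pip module could be used instead if Python’s pip is already set up on the target node.

```yaml
- name: Install Docker-CE with the --nobest option
  command: yum install docker-ce --nobest -y

- name: Install the docker-py Python package
  command: pip3 install docker-py
```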

Pulling httpd Docker image from Docker Hub and Starting Docker Container with httpd image

Ansible has in-built modules to work with many tools including Docker. To pull the httpd image available publicly on DockerHub, we use the docker_image module. We can then copy the desired files using the copy module, and finally, start the Docker container using the docker_container module.
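Putting these together, a sketch of the final tasks follows. The container name webserver, the host port 8080, and the directory /webfiles are assumptions for illustration; here the copied webpage is made available to the container through a volume mounted at httpd’s default Document Root (/usr/local/apache2/htdocs). Note that the Docker service must be running before containers can be launched, so we start it first with the service module.

```yaml
# Assumed names: container "webserver", webpage directory /webfiles
- name: Start the Docker service
  service:
    name: docker
    state: started

- name: Pull the httpd image from Docker Hub
  docker_image:
    name: httpd
    source: pull

- name: Launch a container with the httpd image
  docker_container:
    name: webserver
    image: httpd
    state: started
    ports:
      - "8080:80"
    volumes:
      - /webfiles:/usr/local/apache2/htdocs
```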

But wait! We can’t run the playbook right away!

Before running any playbook, it is best to check if there are no errors. Ansible provides an ad-hoc command for this purpose.

ansible-playbook --syntax-check docker_playbook.yml

If the above command returns no error, then we are good to go!

To run the playbook, we use the following ad-hoc command

ansible-playbook -v docker_playbook.yml

In this command, the option -v provides verbose output, which is essentially a more detailed output. We can use from one to five v’s (-v, -vv, -vvv, -vvvv, and -vvvvv) for increasing verbosity. When the playbook runs, we see output like the one shown below on the control node’s command line.

Figure 1. Sample output on the control node terminal

In the output above, we see that before the execution of the first task, Ansible performs a task named Gathering facts. This task is performed at the beginning of every playbook’s execution. It is very important, as it gathers details about the host computers and stores them in a pre-defined variable called ansible_facts. We have not used this variable yet, but it is instrumental in writing tasks and conditioning their execution on the host system’s specifications and features.

Most of the tasks give the output ok. In Ansible, this means that the task made no change on the target node. Now, if we check for the running containers on the target node, we see the following output.

Figure 2. Sample output for the target node
Figure 3. Successfully accessing the webpage

Conclusion

In this article, we have seen an example of how Ansible helps us complete multiple steps of software installation and configuration using a single script file, which can be applied to multiple hosts simultaneously. This is simply a taste of what Ansible is truly capable of.

If you liked the article, please clap for it!

In the next article, let’s see a similar setup — Setting up a Hadoop Cluster with Ansible.
