Working with AWS, Terraform and GitHub — Part II
Integration of AWS, Terraform, and GitHub for Automated Deployment Infrastructure
In a previous article, I gave a glimpse into how we can create different resources provided by Amazon through AWS, using Terraform. We saw the use of services like EC2, EBS, S3, CloudFront, etc. In this article, we will be looking at another storage service offered by AWS called EFS. We will also work with the concept of modules, input variables, and output values in Terraform.
Choosing the right AWS Storage Service
Amazon offers a variety of storage services, such as Elastic Block Store (EBS), Elastic File System (EFS), and S3. Each of these services has its own set of advantages and areas of use. In my previous article, I used EBS to provide persistent storage for the web content hosted on the EC2 instance I launched. However, this approach works only when the web content needs to be accessible from a single EC2 instance, because a standard EBS volume can be attached to only one instance at a time.
What if our requirements change and we need to allow the same website to be accessed by multiple EC2 instances simultaneously?
In such scenarios, we can use the Elastic File System (EFS), which can be mounted to different AWS services and accessed from multiple instances simultaneously. EFS also offers other additional benefits:
a) It offers throughput as per the demand of the workload.
b) It can autoscale the storage as per requirements. This also saves cost as the user only needs to pay for the storage being used.
c) It is a service completely managed by Amazon, which means the user does not need to worry about maintaining the file system; any issues are handled by AWS itself.
d) It also offers data encryption and different levels of access to the file system of the user. This increases the security of the data.
As with the other Amazon Services, EFS can also be provisioned using Terraform.
Modules, Input Variables and Output Values in Terraform
In my previous article, we saw that AWS resources can be provisioned using Terraform by writing our requirements in HCL in files with the .tf extension. The code we have written so far is small, with relatively few resources being provisioned, so it was quite comfortable to write everything in a single file. However, in real-world applications, the number of resources being provisioned can be very large, with complex configurations and even the same resource being configured again and again.
For such requirements, it is advantageous to split the code into multiple files and integrate them during provisioning in the main Terraform code file.
When the code is split into multiple files, we can use the concept of modules to ‘call’ for the provisioning of the required resources defined in the module, inside a ‘main’ Terraform file. Every time a module is called in this manner, one instance of the resource is created.
Modules work like ‘black boxes’, where the underlying complexity of defining the resource configuration is hidden, and the user simply needs to give inputs that will be used in provisioning the resource. Users can give inputs using the concept of input variables. Since modules work like ‘black boxes’, their inner parameters (resource properties) cannot be accessed directly. This brings us to the concept of output values in Terraform, which can pass out values from within the module.
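As a minimal sketch of this pattern (the file layout, variable name, and AMI ID here are illustrative, not the article's exact code), a module with one input variable and one output value might look like this:

```hcl
# ./webserver/webserver.tf — a module sketch (names are illustrative)

# Input variable: supplied by the caller when the module is called.
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "web" {
  ami           = "ami-0447a12f28fddb066"   # example AMI ID
  instance_type = var.instance_type
}

# Output value: passes an inner attribute back out of the 'black box'.
output "public_ip" {
  value = aws_instance.web.public_ip
}
```

The root module would then call it with `module "web" { source = "./webserver" instance_type = "t2.micro" }`, and could read the address as `module.web.public_ip` without knowing anything about the module's internals.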
Integrating AWS Services
Now, we will begin integrating different services provided by AWS by writing code in Terraform. The services we will be integrating are EC2, EFS, S3, and CloudFront. By splitting the code for provisioning each resource into its own file, we reduce the volume and complexity of the ‘main’ Terraform file. Terraform treats the working directory as the root module and loads every .tf file in it; by convention, we place the provider definitions in a file named ‘main.tf’. When splitting the code based on the resource being provisioned, we can see a directory tree as shown below. Note that the files terraform.tfstate and terraform.tfstate.backup and the hidden folder .terraform are created only after at least one successful provisioning of the resource(s).
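The original layout was shown as an image; a representative tree, assuming each module lives in a folder named after its resource, would look like:

```
.
├── main.tf
├── sg/
│   └── sg.tf
├── ec2/
│   └── ec2.tf
├── efs/
│   └── efs.tf
├── s3/
│   └── s3.tf
├── cf/
│   └── cf.tf
├── terraform.tfstate          # created after the first apply
├── terraform.tfstate.backup   # created after the first apply
└── .terraform/                # created by 'terraform init'
```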
a) Creating the Security Group (In the file sg.tf)
First, we create the security group needed by both the EFS file system and the EC2 instance. The security group allows inbound traffic on ports 80 (HTTP), 22 (SSH), and 2049 (NFS), and allows all outbound traffic.
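A sketch of this security group (the resource and output names are illustrative; the original code appeared as an image) could look like:

```hcl
# sg/sg.tf — security group for HTTP, SSH and NFS (names are illustrative)
resource "aws_security_group" "allow_web_nfs" {
  name        = "allow_web_nfs"
  description = "Allow HTTP, SSH and NFS inbound traffic"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"      # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Output values let the calling module attach this group to other resources.
output "sg_name" {
  value = aws_security_group.allow_web_nfs.name
}

output "sg_id" {
  value = aws_security_group.allow_web_nfs.id
}
```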
b) Creating the EC2 instance (In the file ec2.tf)
Next, we create the EC2 instance by calling the security group’s module.
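A sketch of the instance definition follows; the module path, AMI ID, and key-pair name are assumptions, and it presumes the security group module exposes an output named `sg_name`:

```hcl
# ec2/ec2.tf — EC2 instance sketch (AMI, key name and paths are illustrative)
module "sg" {
  source = "../sg"          # calls the security group module defined in sg.tf
}

resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb066"   # example Amazon Linux 2 AMI
  instance_type   = "t2.micro"
  key_name        = "mykey"                   # assumed existing key pair
  security_groups = [module.sg.sg_name]

  tags = {
    Name = "webserver"
  }
}

# Expose attributes needed by the EFS module and the root module.
output "instance_id" {
  value = aws_instance.web.id
}

output "public_ip" {
  value = aws_instance.web.public_ip
}
```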
c) Creating the EFS storage and mounting it on the EC2 instance (In the file efs.tf)
The configuration for creating the EFS storage and mounting it onto the /var/www/html directory of the EC2 instance is given in a single file named efs.tf.
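A sketch of that file follows. The input variables, key file, and SSH user are assumptions; the mount target makes the file system reachable over NFS inside the subnet, and a remote-exec provisioner then mounts it on the instance:

```hcl
# efs/efs.tf — EFS creation and mounting sketch (names are illustrative)
variable "subnet_id" {
  type = string
}

variable "sg_id" {
  type = string
}

variable "public_ip" {
  type = string
}

resource "aws_efs_file_system" "web_fs" {
  creation_token = "web-fs"
  tags = {
    Name = "web-fs"
  }
}

# A mount target exposes the file system to instances in the subnet via NFS.
resource "aws_efs_mount_target" "web_fs_mount" {
  file_system_id  = aws_efs_file_system.web_fs.id
  subnet_id       = var.subnet_id
  security_groups = [var.sg_id]
}

# Mount the file system on /var/www/html over SSH.
resource "null_resource" "mount_efs" {
  depends_on = [aws_efs_mount_target.web_fs_mount]

  connection {
    type        = "ssh"
    user        = "ec2-user"              # assumed Amazon Linux user
    private_key = file("mykey.pem")       # assumed key file
    host        = var.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y amazon-efs-utils",
      "sudo mount -t efs ${aws_efs_file_system.web_fs.id}:/ /var/www/html",
    ]
  }
}
```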
d) Creating S3 Storage for the image (In the file s3.tf)
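The bucket holds the image used on the webpage. A sketch of this file follows; the bucket name and object key are illustrative (bucket names must be globally unique), and the `aws_s3_bucket_object` resource reflects the AWS provider version current at the time of writing:

```hcl
# s3/s3.tf — S3 bucket and image upload sketch (names are illustrative)
resource "aws_s3_bucket" "image_bucket" {
  bucket = "my-webpage-image-bucket"   # must be globally unique
  acl    = "public-read"
}

resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.image_bucket.bucket
  key    = "image.png"
  source = "image.png"                 # assumed local file path
  acl    = "public-read"
}

# The CloudFront module uses this domain as its origin.
output "bucket_domain" {
  value = aws_s3_bucket.image_bucket.bucket_regional_domain_name
}
```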
e) Creating CloudFront to provide secure access to the image stored in S3 (In the file cf.tf)
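CloudFront serves the image over HTTPS from edge locations instead of exposing the bucket URL directly. A sketch follows, assuming the S3 module passes in its bucket domain; the origin ID is an arbitrary label:

```hcl
# cf/cf.tf — CloudFront distribution sketch (names are illustrative)
variable "bucket_domain" {
  type = string
}

resource "aws_cloudfront_distribution" "image_cdn" {
  enabled = true

  origin {
    domain_name = var.bucket_domain
    origin_id   = "s3-image-origin"

    s3_origin_config {
      origin_access_identity = ""   # public bucket; no OAI in this sketch
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-image-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# The webpage references the image through this domain.
output "cdn_domain" {
  value = aws_cloudfront_distribution.image_cdn.domain_name
}
```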
f) Integrating all configurations (In the file main.tf)
Now, we can create each resource by calling its respective module in the root module, which is the file main.tf. Note that we don’t need to call the security group module separately, as it is already called during the creation of the EC2 instance; calling a module a second time would attempt to create resources with the same identifiers (properties like id, name, etc.), which gives errors during provisioning. Since the file main.tf is the root module, we also give the details of the resource providers at the start.
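A sketch of the root module follows. The module paths, region, profile, and the output names passed between modules are assumptions matching the layout described above:

```hcl
# main.tf — root module sketch (paths, region and outputs are illustrative)
provider "aws" {
  region  = "ap-south-1"   # assumed region
  profile = "default"      # assumed local AWS CLI profile
}

# The ec2 module internally calls the security group module,
# so sg is not called again here.
module "ec2" {
  source = "./ec2"
}

module "efs" {
  source    = "./efs"
  public_ip = module.ec2.public_ip
  # subnet_id and sg_id would also be wired through here
}

module "s3" {
  source = "./s3"
}

module "cf" {
  source        = "./cf"
  bucket_domain = module.s3.bucket_domain
}
```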
In this file, we also include a null resource through which we establish an SSH connection to the EC2 instance. This is done so that we can insert the image we have stored in the S3 bucket into the index.html webpage. Once the image is inserted, the webpage is also automatically launched using the Chrome web browser.
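A sketch of that null resource follows; the SSH user, key file, and the assumption that the ec2 and cf modules expose `public_ip` and `cdn_domain` outputs are all illustrative, and the local-exec command assumes Terraform runs on a Windows host with Chrome installed:

```hcl
# main.tf (continued) — null resource sketch (names are illustrative)
resource "null_resource" "update_webpage" {
  depends_on = [module.cf, module.efs]

  connection {
    type        = "ssh"
    user        = "ec2-user"          # assumed Amazon Linux user
    private_key = file("mykey.pem")   # assumed key file
    host        = module.ec2.public_ip
  }

  # Append an <img> tag pointing at the CloudFront URL of the image.
  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${module.cf.cdn_domain}/image.png'>\" | sudo tee -a /var/www/html/index.html",
    ]
  }

  # Runs on the machine executing Terraform, not on the instance:
  # opens the finished webpage in Chrome.
  provisioner "local-exec" {
    command = "start chrome http://${module.ec2.public_ip}/index.html"
  }
}
```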
A Look at the Resources Provisioned on the AWS Web Console
Once all the resources have been successfully provisioned, we can see their status and other information on the AWS Web Console.