Working with AWS, Terraform and GitHub — Part II


In a previous article, I gave a glimpse into how we can create different AWS resources using Terraform. We saw the use of services like EC2, EBS, S3, and CloudFront. In this article, we will look at another storage service offered by AWS, called EFS. We will also work with the concepts of modules, input variables, and output values in Terraform.

Choosing the right AWS Storage Service

Amazon offers a variety of storage services, such as Elastic Block Store (EBS), Elastic File System (EFS), and S3. Each of these services has its own set of advantages and areas of use. In my previous article, I used EBS to provide persistent storage for the web content on the EC2 instance that I launched. However, this works only as long as the webpage needs to be accessible from a single EC2 instance.

What if our requirements change and we need to allow the same website to be accessed by multiple EC2 instances simultaneously?

In such scenarios, we can use the Elastic File System (EFS), which can be mounted on different AWS services and accessed from multiple instances simultaneously. EFS offers several additional benefits:

a) It offers throughput as per the demand of the workload.

b) It can automatically scale storage as per requirements. This also saves cost, as the user pays only for the storage actually used.

c) It is a service completely managed by Amazon, which means the user does not need to worry about issues with the file system; they are handled by AWS itself.

d) It also offers data encryption and different levels of access control to the user's file system, which increases the security of the data.

As with the other Amazon Services, EFS can also be provisioned using Terraform.

Modules, Input Variables and Output Values in Terraform

In my previous article, we saw that AWS resources can be provisioned using Terraform by writing down our requirements in HCL, in files with the .tf extension. The code we have written so far is small, provisioning only a handful of resources, so it was quite comfortable to write everything in a single file. However, in real-world applications, the number of resources being provisioned can be very large, with complex configurations and even repetitions of the same resource being configured time and again.

For such requirements, it is advantageous to split the code into multiple files and integrate them during provisioning in the main Terraform code file.

When the code is split into multiple files, we can use the concept of modules to ‘call’ for the provisioning of the resources defined in a module from inside a ‘main’ Terraform file. Every time a module is called in this manner, one instance of its resources is created.

Modules work like ‘black boxes’, where the underlying complexity of defining the resource configuration is hidden, and the user simply needs to give inputs that will be used in provisioning the resource. Users can give inputs using the concept of input variables. Since modules work like ‘black boxes’, their inner parameters (resource properties) cannot be accessed directly. This brings us to the concept of output values in Terraform, which can pass out values from within the module.
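As a minimal illustration of these three concepts (the module path, variable names, and output names here are hypothetical, not the ones used later in this project), a module can declare input variables and an output value, and the caller passes the inputs and reads the output:

```hcl
# modules/server/main.tf — a hypothetical module

variable "ami_id" {
  description = "AMI to launch (input supplied by the caller)"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type to launch"
  type        = string
  default     = "t2.micro"
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

# Output value: passes the instance ID out of the module's 'black box'
output "instance_id" {
  value = aws_instance.web.id
}

# ------------------------------------------------------------------
# In the root module, each 'module' block creates one instance of the
# resources defined above:
#
# module "server" {
#   source        = "./modules/server"
#   ami_id        = "ami-xxxxxxxx"   # placeholder
#   instance_type = "t2.micro"
# }
#
# The module's output is then available as module.server.instance_id.
```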

Integrating AWS Services

Now, we will begin integrating the different AWS services by writing Terraform code. The services we will integrate are EC2, EFS, S3, and CloudFront. By splitting the code for provisioning each resource into its own file, we can reduce the volume and complexity of the ‘main’ Terraform file. Terraform looks for the definition of the resource provider to choose the main file. In our case, we will define the file as ‘’. When the code is split based on the resource being provisioned, we can see a directory tree as shown below. Note that the files terraform.tfstate and terraform.tfstate.backup and the hidden folder .terraform are created only after at least one successful provisioning of the resource(s).

The directory structure for all the code files

a) Creating the Security Group (In the file )

First, we create the security group needed by both the EFS storage and the EC2 instance. The security group allows all inbound traffic on ports 80 (for HTTP), 22 (for SSH access), and 2049 (for the NFS protocol). It also allows all outbound traffic.
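A sketch of such a security group might look like the following (the resource and output names are illustrative, and I've assumed the default VPC):

```hcl
# Security group allowing HTTP, SSH and NFS inbound, everything outbound
resource "aws_security_group" "web_sg" {
  name        = "web-efs-sg"
  description = "Allow HTTP, SSH and NFS traffic"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"          # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Output value so other modules (EC2, EFS) can reference this group
output "sg_id" {
  value = aws_security_group.web_sg.id
}
```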

b) Creating the EC2 instance (In the file )

Next, we create the EC2 instance by calling the security group’s module.
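A sketch of the EC2 module, assuming it calls the security-group module from a sibling directory (the module path, AMI ID, and key name are placeholders, not the article's actual values):

```hcl
# Call the security-group module; one call = one instance of its resources
module "sg" {
  source = "../sg"   # assumed path to the security-group module
}

variable "key_name" {
  description = "Name of an existing EC2 key pair (assumed input)"
  type        = string
}

resource "aws_instance" "web" {
  ami                    = "ami-xxxxxxxxxxxx"   # placeholder AMI ID
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [module.sg.sg_id]    # output of the SG module

  tags = {
    Name = "web-server"
  }
}

# Outputs other modules will need (e.g. for SSH and EFS mounting)
output "instance_id" {
  value = aws_instance.web.id
}

output "public_ip" {
  value = aws_instance.web.public_ip
}
```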

c) Creating the EFS storage and mounting it on the EC2 instance (In the file )

The configuration for creating the EFS storage and mounting it onto the /var/www/html directory of the EC2 instance is given in a single file with the name
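A sketch of that file, assuming the subnet, security group, SSH key, and instance IP are passed in as module inputs (all names here are illustrative):

```hcl
# Assumed module inputs
variable "subnet_id" {}
variable "sg_id" {}
variable "instance_public_ip" {}
variable "private_key" {}

resource "aws_efs_file_system" "web_fs" {
  creation_token = "web-fs"

  tags = {
    Name = "web-fs"
  }
}

# Mount target makes the file system reachable over NFS in the subnet
resource "aws_efs_mount_target" "web_mount" {
  file_system_id  = aws_efs_file_system.web_fs.id
  subnet_id       = var.subnet_id
  security_groups = [var.sg_id]
}

# Mount the file system on /var/www/html over SSH
resource "null_resource" "mount" {
  depends_on = [aws_efs_mount_target.web_mount]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = var.private_key
    host        = var.instance_public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y amazon-efs-utils",
      "sudo mount -t efs ${aws_efs_file_system.web_fs.id}:/ /var/www/html",
    ]
  }
}
```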

d) Creating S3 Storage for the image (In the file )
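A sketch of the S3 configuration, using the v3-era AWS provider syntax that was current for this kind of setup (the bucket name, object key, and local image path are placeholders):

```hcl
# S3 bucket to hold the webpage image; bucket names must be globally unique
resource "aws_s3_bucket" "image_bucket" {
  bucket = "my-webpage-image-bucket"   # placeholder name
  acl    = "public-read"
}

# Upload the image as a bucket object
resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.image_bucket.id
  key    = "webpage-image.jpg"
  source = "images/webpage-image.jpg"  # assumed local path
  acl    = "public-read"
}

# Output the bucket's domain name for use by the CloudFront module
output "bucket_domain_name" {
  value = aws_s3_bucket.image_bucket.bucket_regional_domain_name
}
```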

e) Creating CloudFront to provide secure access to the image stored in S3 (In the file )
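A sketch of a CloudFront distribution in front of the S3 bucket (the origin ID is arbitrary, and the bucket domain name is assumed to come in as a module input):

```hcl
variable "bucket_domain_name" {
  description = "Domain name of the S3 bucket (assumed input)"
  type        = string
}

resource "aws_cloudfront_distribution" "image_cdn" {
  enabled = true

  origin {
    domain_name = var.bucket_domain_name
    origin_id   = "s3-image-origin"
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-image-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# The CDN domain is used to build the image URL in the webpage
output "cdn_domain" {
  value = aws_cloudfront_distribution.image_cdn.domain_name
}
```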

f) Integrating all configurations (In the file )

Now, we can create each resource by calling its respective module in the root module file. Note that we don’t need to call the module for the security group separately, as it is already called during the creation of the EC2 instance. Calling a module twice to create resources with the same identifiers (properties like id, name, etc.) will give errors during the provisioning of the resources. Since this file is the root module, we also give the details of the resource providers at the start.

In this file, we also include a null resource through which we establish an SSH connection to the EC2 instance. This is done so that we can insert the image we have stored in the S3 bucket into the index.html webpage. Once the image is inserted, the webpage is also automatically launched using the Chrome web browser.
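A sketch of how the root module could wire everything together, including the final null resource (module paths, the region, the object key, and the local-exec command are assumptions; the browser-launch command shown is for Windows):

```hcl
# Resource provider details come first in the root module
provider "aws" {
  region  = "ap-south-1"   # assumed region
  profile = "default"
}

variable "private_key" {}

# Call each module; the security-group module is called inside ec2
module "ec2" {
  source = "./ec2"
}

module "efs" {
  source             = "./efs"
  instance_public_ip = module.ec2.public_ip
}

module "s3" {
  source = "./s3"
}

module "cloudfront" {
  source             = "./cloudfront"
  bucket_domain_name = module.s3.bucket_domain_name
}

# Insert the CloudFront image URL into index.html over SSH, then open it
resource "null_resource" "finish" {
  depends_on = [module.efs, module.cloudfront]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = var.private_key
    host        = module.ec2.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su -c \"echo '<img src=https://${module.cloudfront.cdn_domain}/webpage-image.jpg>' >> /var/www/html/index.html\"",
    ]
  }

  provisioner "local-exec" {
    command = "start chrome http://${module.ec2.public_ip}/index.html"
  }
}
```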

A Look at the Resources Provisioned on the AWS Web Console

Once all the resources have been successfully provisioned, we can see their status and other information on the AWS Web Console.

EC2 instance
EFS Storage
S3 Bucket with the Image as S3 Bucket Object
CloudFront Distribution
Final Webpage output




ECE Undergrad | ML, AI and Data Science Enthusiast | Avid Reader | Keen to explore different domains in Computer Science

Akshaya Balaji
