Integration of AWS, Terraform, and GitHub for Automated Deployment Infrastructure

Cloud computing is the delivery of computing services over the Internet, including resources such as data storage, servers, databases, networking, and software. As the requirements of companies and businesses of all sizes grow every day, different types of cloud services are being offered to meet them. AWS is a public cloud service provider. By integrating it with Terraform, a powerful infrastructure-management tool, and with the version control systems Git and GitHub, we can build an almost completely automated infrastructure for deploying services and products to clients.

In this article, we will be working with a sample scenario of integrating services of AWS, Terraform, and GitHub based on the following requirements.

1. Create a key pair and a security group that allows ingress on port 80 for HTTP.

2. Launch an EC2 instance. In this instance, use the key pair and security group created in Step 1.

3. Launch one EBS volume and mount it onto the /var/www/html directory of the EC2 instance.

4. Consider a GitHub repository containing some images. The images need to be copied into the /var/www/html directory of the EC2 instance.

5. Create an S3 bucket, copy/deploy the images from the GitHub repository into it, and make them publicly readable.

6. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in the /var/www/html directory of the EC2 instance.

AWS offers both a web UI and a CLI to set up the required infrastructure, but it is always preferable to keep all the specifications in code files. This makes it easy to replicate our test environments for the client(s) when the product or service is ready to be deployed. Hence, we will configure all the required AWS services using Terraform.

The remaining parts of the article are explained under the following considerations:

  1. The base OS used is Windows 10.
  2. OpenSSH and the AWS CLI (the ‘aws’ command) are installed on Windows and added to the PATH.
  3. The third-party tools PuTTY and PuTTYgen are downloaded.

The first step is to create a workspace for our work to set up the infrastructure. This is done through the command line in Windows. The current directory is then shifted to the workspace and we can begin writing the configuration file. The configuration file has the extension ‘.tf’ and it uses different types of keywords, attributes, and syntaxes, which makes the official documentation of Terraform for AWS very important to refer to at each stage. Hence, a list of important resources to understand the code better has been given at the end of the article.
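
For example, on the Windows command prompt (the directory and file names here are arbitrary choices, not something prescribed by Terraform):

```
mkdir terraform-workspace
cd terraform-workspace
notepad main.tf
```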

First, since we are using the services of AWS, the ‘provider’ is configured as ‘aws’, as shown below.
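
A minimal sketch of the provider block is given below; the region and profile name are placeholders to be adjusted for your own account.

```hcl
provider "aws" {
  region  = "ap-south-1"  # placeholder region
  profile = "myprofile"   # profile created later with 'aws configure --profile myprofile'
}
```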

The first requirement is to create a key pair and a security group. A key pair is a means of authentication, similar to the familiar username-password system, but much more secure. Security groups are essentially sets of rules that we define for services like EC2 instances to control the protocols and ports over which they can communicate. Since we are serving HTML documents, we need to allow the HTTP protocol, and we also need the SSH protocol to remotely control the EC2 instance we will be creating. These objectives can be achieved with the following lines of code.
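
The sketch below follows the approach described in the article; the resource names, key name, and the default VPC ID are assumptions to adapt to your own setup.

```hcl
# Generate the key pair directly in Terraform.
resource "tls_private_key" "deploy_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "deploy_key" {
  key_name   = "deploy-key"
  public_key = tls_private_key.deploy_key.public_key_openssh
}

# Store the private PEM key locally so it can also be used with OpenSSH/PuTTY.
resource "local_file" "private_key_pem" {
  content         = tls_private_key.deploy_key.private_key_pem
  filename        = "deploy-key.pem"
  file_permission = "0400"
}

# Security group allowing SSH (22) and HTTP (80) ingress.
resource "aws_security_group" "allow_http_ssh" {
  name        = "allow_http_ssh"
  description = "Allow SSH and HTTP ingress"
  vpc_id      = "vpc-xxxxxxxx" # default VPC ID of the account (placeholder)

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```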

In the code above, the required private and public keys are created directly in Terraform itself. However, there are alternative methods to generate the key pair, such as using third-party software or the ssh-keygen command on the Windows CMD. After the keys are created and the private PEM key is stored, the security group is created. Ingress traffic is allowed over the SSH protocol through port 22 and the HTTP protocol through port 80. The vpc_id used is the ID of the default VPC provided when the AWS account is created.

Next, we create the EC2 instance, which is essentially a virtual machine launched from a base operating system image (AMI). The code to create the EC2 instance is shown below.
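
A sketch is given below. The AMI ID, instance type, and availability zone are assumptions, as is the choice to install the Apache web server and Git through a remote-exec provisioner at this stage.

```hcl
resource "aws_instance" "web" {
  ami                    = "ami-0447a12f28fddb066" # Amazon Linux 2 (placeholder AMI ID)
  instance_type          = "t2.micro"
  availability_zone      = "ap-south-1a"
  key_name               = aws_key_pair.deploy_key.key_name
  vpc_security_group_ids = [aws_security_group.allow_http_ssh.id]

  # Log in over SSH with the generated private key once the instance is up.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.deploy_key.private_key_pem
    host        = self.public_ip
  }

  # Install the web server and Git so the later steps can serve and clone content.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git",
      "sudo systemctl enable --now httpd",
    ]
  }

  tags = {
    Name = "web-server"
  }

  # Explicitly wait for the key pair and security group before launching.
  depends_on = [aws_key_pair.deploy_key, aws_security_group.allow_http_ssh]
}
```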

Here, we see the attribute depends_on. Terraform does not necessarily execute blocks of code in the order in which they are written; it builds a dependency graph and may create resources in parallel. When one resource must exist before another but Terraform cannot infer that relationship on its own, depends_on lets us state the ordering explicitly. This ensures that all the basic components are set up before the more complex parts that build on them.

The next objective is to create additional storage using the EBS service. The main purpose of the additional storage is to provide persistent data storage. This means that even if the EC2 instance is shut down, we will not lose the data stored in the additional volume. We can link the EC2 instance and the EBS volume by mounting it onto a given directory in the EC2 instance and then storing the required files there. These objectives can be accomplished by the following code.
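
A sketch of the volume and its attachment is given below; the volume size, device name, and the GitHub repository URL are placeholders.

```hcl
resource "aws_ebs_volume" "web_data" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1 # size in GiB (placeholder)

  tags = {
    Name = "web-data"
  }
}

resource "aws_volume_attachment" "web_data_attach" {
  device_name  = "/dev/sdh" # appears as /dev/xvdh inside Amazon Linux
  volume_id    = aws_ebs_volume.web_data.id
  instance_id  = aws_instance.web.id
  force_detach = true

  # Remote login to format and mount the new volume.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.deploy_key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      # Pull the HTML document and images from the GitHub repository (placeholder URL).
      "sudo git clone https://github.com/<user>/<repo>.git /var/www/html/",
    ]
  }
}
```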

In the code above, we first define the EBS volume. Once it has been created, we attach it to the EC2 instance. A new block which is not exclusive to aws_volume_attachment is the connection block. It is used to initiate a remote login into the EC2 instance, which we need in order to format the EBS volume and mount it. The commands themselves are given through a provisioner, a block of code within a resource that is mainly meant to execute the commands defined in its inline attribute. While formatting and mounting, we also download the HTML document and the images into the mounted storage.

Next, we create the S3 bucket. S3 is an object storage service, in contrast to the block storage provided by EBS, and brings added benefits such as easy public access and integration with CloudFront. We also upload the sample images we will be using in the HTML document for testing, as shown below.
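
A sketch using AWS provider v3-style syntax is given below (newer provider versions replace the acl argument and the aws_s3_bucket_object resource with dedicated resources); the bucket name, object key, and local file path are assumptions.

```hcl
resource "aws_s3_bucket" "image_bucket" {
  bucket = "my-terraform-image-bucket-001" # must be globally unique (placeholder)
  acl    = "public-read"
}

resource "aws_s3_bucket_object" "sample_image" {
  bucket = aws_s3_bucket.image_bucket.id
  key    = "sample.jpg"
  source = "images/sample.jpg" # local copy of the image from the GitHub repository
  acl    = "public-read"
}
```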

The next step is to create the CloudFront distribution, which is used to expose the S3 data to the outside world in an efficient manner by caching it at edge locations close to the users. An important point to consider is why we need CloudFront at all when we have already given public access to the S3 bucket. The answer is that, besides faster delivery, CloudFront offers additional security for the content being accessed from the S3 bucket(s). The CloudFront distribution is created using the code below.
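
The following is a minimal sketch of a distribution fronting the S3 bucket; the origin ID is an arbitrary label and the cache behavior settings are assumptions.

```hcl
locals {
  s3_origin_id = "s3-image-origin"
}

resource "aws_cloudfront_distribution" "image_cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.image_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = local.s3_origin_id
    viewer_protocol_policy = "allow-all"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```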

Once the CloudFront distribution is created, the image can be accessed through the CloudFront domain name, and we can use that domain name to reference the image inside the standard HTML document index.html. This is done by the following code.
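
A sketch is given below; the image key and the HTML snippet being appended are assumptions.

```hcl
resource "null_resource" "update_index_html" {
  # Wait until both the distribution and the mounted web root exist.
  depends_on = [
    aws_cloudfront_distribution.image_cdn,
    aws_volume_attachment.web_data_attach,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.deploy_key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # Append an <img> tag pointing at the image served via CloudFront.
      "echo '<img src=\"https://${aws_cloudfront_distribution.image_cdn.domain_name}/sample.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}
```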

In the code snippet above, the null_resource is used because every provisioner must be written within the block scope of a resource. If there is no particular resource to which the provisioner can be attached, we can opt for the null_resource.

All the code snippets above are written together inside one single Terraform file with the extension ‘.tf’.

Once the file has been written and saved, we need to run a few commands on the Windows CMD to successfully execute it. The commands below must be run inside the workspace containing the Terraform configuration file. First, we configure the AWS user with the access key and secret key using aws configure --profile <profile_name>. The profile name is the same as the one mentioned in the profile attribute of the provider code block.
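
For example, assuming the profile name myprofile used in the provider block above:

```
aws configure --profile myprofile
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: ap-south-1
Default output format [None]: json
```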

Once the AWS user profile is configured, we initialize Terraform using terraform init. This downloads all the plugins Terraform needs to run the configuration file we have written. Finally, we can validate and execute the code as shown below. terraform validate checks the syntax of the configuration, and terraform apply executes it. -auto-approve is an additional option that skips the interactive confirmation prompts that would otherwise appear during execution.
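
Run from inside the workspace directory, the sequence looks like this:

```
terraform init
terraform validate
terraform apply -auto-approve
```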

To remove all the created infrastructure, we can use a single command on the command line as follows.
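
Again from inside the workspace directory:

```
terraform destroy -auto-approve
```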

Based on the executions done above, Terraform reports each resource as it is created, and once the run completes, the web page served from the EC2 instance displays the image delivered through the CloudFront domain.
