Create/Launch an application using Terraform

Backbencher05
6 min read · Jun 28, 2021

Pre-Requisites:

  • AWS knowledge
  • AWS CLI configured with an Amazon AWS account
  • Terraform CLI
  • A GitHub repo

Problem Statement

  1. Create a private key and a security group that allows port 80.
  2. Launch an Amazon AWS EC2 instance.
  3. Use the key and security group created in step 1 to log in to this EC2 instance, remotely or locally.
  4. Launch an EBS volume and mount it on /var/www/html.
  5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Let’s start with the task!

Step 1: Configuring AWS with Terraform

After creating a profile on AWS using our CLI, we have to write the Terraform code. Here, we'll make a folder and keep everything in that folder. Give the file any name with the '.tf' extension. First, we have to declare the AWS provider so that our Terraform code knows which service it has to contact (in this case, we give Terraform the AWS provider so that it can interact with Amazon Web Services). To know more about providers, click here.

provider "aws" {
  region     = "ap-south-1"
  access_key = "***************************"
  secret_key = "****************************"
}
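
Hardcoding the access and secret keys works for a quick demo, but it's safer not to commit them anywhere. Assuming you've already created a named profile with the AWS CLI (myprofile below is just an example name), the provider can read the credentials from it instead:

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile" // example profile created earlier with `aws configure --profile myprofile`
}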

Step 2: Creating a private key and generating a key pair

To create a private key, we'll use Terraform's tls_private_key resource. After creating the private key, we'll generate the key pair using the aws_key_pair resource, which depends on our private key!

// Create key pair

// 1. Creating a private key

resource "tls_private_key" "my_key" {
  algorithm = "RSA"
}

// Save the private key locally so we can SSH in with it later
resource "local_file" "private_key" {
  content  = tls_private_key.my_key.private_key_pem
  filename = "mykeyfromtf.pem"
}

// 2. Generating/creating the key pair from the private key

resource "aws_key_pair" "keypair" {
  key_name   = "t1keypair"
  public_key = tls_private_key.my_key.public_key_openssh
  depends_on = [
    tls_private_key.my_key
  ]
  tags = {
    Name = "keytf"
  }
}

output "inst_op" {
  value = aws_key_pair.keypair
}
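
After terraform apply, the private key is saved as mykeyfromtf.pem in the working directory. To log in to the instance later (step 3 of the problem statement), something like this should work, with <public-ip> replaced by the instance's actual public IP:

# ssh refuses keys that are world-readable, so tighten permissions first
chmod 400 mykeyfromtf.pem
ssh -i mykeyfromtf.pem ec2-user@<public-ip>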

Step 3: Creating the security group

A security group in AWS acts as a firewall: after we set up its inbound and outbound rules, it allows connections only to and from particular IP addresses. We'll use the aws_security_group resource to create the security group.

// Create the security group

resource "aws_security_group" "my_security" {
  name        = "attach_to_my_os"
  description = "For allowing port 22 and port 80"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my_security_tf"
  }
}

output "mysg" {
  value = aws_security_group.my_security
}

Let's launch an EC2 instance, configure the Apache web server on it, and deploy our webpage.

Step 4: Creating our EC2 instance

We need an instance on which we can deploy our application. Any instance would do, but here I've used the "Amazon Linux 2 AMI" with the t2.micro instance type. We'll use the aws_instance resource to create our AWS instance.

// Launch the instance

resource "aws_instance" "terra" {
  ami           = "ami-010aff33ed5991201"
  instance_type = "t2.micro"
  root_block_device {
    volume_size           = 8
    volume_type           = "gp2"
    delete_on_termination = true
  }
  security_groups = [aws_security_group.my_security.name]
  key_name        = aws_key_pair.keypair.key_name

  // Inside the resource, refer to the instance via self to avoid a cycle
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.my_key.private_key_pem
    host        = self.public_ip
  }

  // Install and enable the Apache web server, git, and PHP
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git php -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "myos"
  }

  depends_on = [
    aws_key_pair.keypair
  ]
}

output "availability_zone" {
  value = aws_instance.terra.availability_zone
}
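
Since a later step opens the page at http://IP/index.html, it's handy to also export the instance's public IP. This output isn't in the original code; a small addition (the name instance_ip is mine):

output "instance_ip" {
  value = aws_instance.terra.public_ip
}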

We have launched an EC2 instance (OS) with a root hard disk attached, but there is a problem: if we store our data on the root disk and the disk gets corrupted, we lose our data. Data is the most critical thing; we can reinstall the OS easily, but we won't get the data back. So we have to attach an extra disk, like a pen drive (an EBS volume).

Step 5: Creating and attaching the EBS volume to the Instance

To create an EBS volume, we'll use the aws_ebs_volume resource. After creating the EBS volume, all that's left is attaching it to the EC2 instance and mounting it on the /var/www/html folder. To do so, we'll use Terraform's aws_volume_attachment resource.

// Create the EBS volume

resource "aws_ebs_volume" "ebs" {
  availability_zone = aws_instance.terra.availability_zone
  size              = 1

  tags = {
    Name = "ebs_for_terra"
  }
  depends_on = [
    aws_instance.terra
  ]
}

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs.id
  instance_id  = aws_instance.terra.id
  force_detach = true // helps when we unmount the volume
}
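
One caveat before formatting the volume in the next step: on Amazon Linux, a volume attached as /dev/sdh often appears inside the OS as /dev/xvdh (or as an NVMe device on newer instance types). If mkfs complains, it's worth confirming the device name first, for example:

# List the instance's block devices to confirm the attached volume's name
lsblk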

Step 6: Mounting the volume and deploying the code from GitHub

// Using a null_resource here so we can run the format/mount/deploy
// commands over SSH only after the volume attachment is in place

resource "null_resource" "nullremote4" {

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.my_key.private_key_pem
    host        = aws_instance.terra.public_ip
  }

  // Format the new volume, mount it on the web root, and clone the code
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/sdh",
      "sudo mount /dev/sdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Backbencher05/HMCtask.git /var/www/html/"
    ]
  }

  depends_on = [
    aws_volume_attachment.ebs_att
  ]
}

resource "null_resource" "nullremote5" {

  // Open the deployed page in a local browser
  // (replace IP with the instance's public IP)
  provisioner "local-exec" {
    command = "chrome http://IP/index.html"
  }
}

Our EBS volume is attached and mounted successfully. Now if our OS goes down, we won't lose our data. But an EBS volume is also a device; it may also get corrupted, and then we would lose everything.

Since the most critical thing is our data, note that we can get:

1. The OS back easily.

2. Our code back easily, since it sits in centralized storage like GitHub.

But if we lose static data like images and videos, we won't get it back. For this, AWS has a service called S3, which guarantees to store your data with high availability and roughly eleven nines (99.999999999%) of durability. So let's create an S3 bucket and store our data there.

Step 7: Creating the S3 bucket and uploading an image

We'll use an S3 bucket for the static content of the webpage. We'll create the bucket using Terraform's aws_s3_bucket resource.

// Creating the S3 bucket

resource "aws_s3_bucket" "task1buckets" {
  bucket = "my-tf-test-bucket54345"
  acl    = "private"

  tags = {
    Name = "mynewbucket"
  }
}

output "aboutS3" {
  value = aws_s3_bucket.task1buckets
}

// Upload the object

resource "aws_s3_bucket_object" "myobjects" {
  bucket       = aws_s3_bucket.task1buckets.bucket
  key          = "abc.jpg" // must match the path the webpage references later
  source       = "E:/mok/DCIM/Camera/abc.jpg"
  content_type = "image/jpeg"
  acl          = "private"
  depends_on = [
    aws_s3_bucket.task1buckets
  ]
}
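
This matches the AWS provider v3 that the article targets. If you're on provider v4 or newer, note that aws_s3_bucket_object has been superseded by aws_s3_object, and the acl arguments moved to separate aws_s3_bucket_acl resources. A minimal sketch of the newer form, under that assumption:

resource "aws_s3_object" "myobjects" {
  bucket       = aws_s3_bucket.task1buckets.bucket
  key          = "abc.jpg"
  source       = "E:/mok/DCIM/Camera/abc.jpg"
  content_type = "image/jpeg"
}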

Everything is good up to here, but when anyone accesses our website, the request has to travel to the particular location where the website is running, which adds latency. AWS has a service to deliver our content very fast using the concept of edge locations: CloudFront, which follows the Content Delivery Network (CDN) approach.

We also don't want anyone to access the data sitting in S3 directly; clients should access it through the CDN. So keep your S3 bucket and object private and allow only CloudFront to access the bucket.

Step 8: Creating the CloudFront distribution

We’ll use Terraform’s aws_cloudfront_distribution resource.

// Creating the CloudFront distribution

locals {
  s3_origin_id = "s3task"
}

resource "aws_cloudfront_distribution" "s3_distributions" {
  origin {
    domain_name = aws_s3_bucket.task1buckets.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled         = true
  is_ipv6_enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "tasks1"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  depends_on = [
    aws_s3_bucket_object.myobjects
  ]
}
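
One caveat: the distribution above still reaches the bucket over its public endpoint, so it does not yet enforce the "private bucket, CloudFront only" idea described before this step. The usual way to do that is an origin access identity plus a bucket policy. A minimal sketch, not part of the original code (the resource names oai and allow_oai are mine):

// Identity CloudFront uses to sign its requests to the private bucket
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for task1buckets"
}

// Allow only that identity to read objects from the bucket
resource "aws_s3_bucket_policy" "allow_oai" {
  bucket = aws_s3_bucket.task1buckets.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.oai.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.task1buckets.arn}/*"
    }]
  })
}

// Then, inside the distribution's origin block, add:
// s3_origin_config {
//   origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
// }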

Step 9: Adding the image to our webpage

// Adding the image to the webpage

resource "null_resource" "cluster" {
  depends_on = [
    aws_instance.terra, aws_cloudfront_distribution.s3_distributions, aws_volume_attachment.ebs_att
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.my_key.private_key_pem
    host        = aws_instance.terra.public_ip
  }

  // Append an <img> tag that pulls abc.jpg through CloudFront
  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${aws_cloudfront_distribution.s3_distributions.domain_name}/abc.jpg'>\" | sudo tee -a /var/www/html/index.php"
    ]
  }
}

We have created everything using Terraform; the whole setup is now IaC (Infrastructure as Code).
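
With the .tf file in place, the standard Terraform workflow brings all of this up (run from the folder containing the file):

# Download the AWS, TLS, local, and null providers used above
terraform init

# Preview the resources Terraform is about to create
terraform plan

# Create everything (add -auto-approve to skip the confirmation prompt)
terraform apply

# Tear the whole setup down when you're done
terraform destroy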

See GitHub, where the complete code is present.

Feel free to ask any questions.

Thank you

Aditya kumar Soni ( Backbencher05)

github: https://github.com/Backbencher05/HMCtask.git
