Launching a Web Application on AWS Cloud using Terraform, GitHub, EFS and CloudFront

AmanGoyal
6 min read · Apr 26, 2021

In this task we will launch a webserver on an EC2 instance backed by the network storage service Amazon EFS. We will also use Amazon S3 for image storage and CloudFront for faster content delivery. All of this will be provisioned using Terraform, an Infrastructure as Code tool.

EFS:- Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It provides centralized storage that can be mounted on multiple instances at once, which is not possible with EBS.

Task Details

Perform task-1 using the EFS service instead of EBS on AWS, as follows:

Create/launch Application using Terraform

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing/provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

LET’S GET STARTED!!

Let’s start writing the Terraform code.

Step-1 Configure your AWS profile to access the account from the command line.

aws configure --profile aman1

Step-2 Add the AWS provider in the Terraform file

provider "aws" {
  region  = "ap-south-1"
  profile = "aman1"
}
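Depending on your Terraform version, you may also want to pin the AWS provider version. This is an optional sketch; the `~> 3.0` constraint is an assumption based on the arguments used below (v4 of the provider changed how `acl` is set on `aws_s3_bucket`):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # assumed constraint: this code uses pre-4.0 arguments such as
      # acl on aws_s3_bucket
      version = "~> 3.0"
    }
  }
}
```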

Step-3 We will now create the security group in the default VPC and set the inbound and outbound rules for HTTP, SSH and EFS (NFS).

resource "aws_security_group" "my_security_group" {
  name        = "my_security_group"
  description = "Allow HTTP inbound traffic"
  vpc_id      = "vpc-87819cef"

  ingress {
    description = "SSH from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "EFS-storage"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "aman_sg"
  }
}

Security Group

Step-4 Now we are going to launch an instance with a key and install httpd, php and git on it.

resource "aws_instance" "myinstance" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykey"
  security_groups = ["my_security_group"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/user/Downloads/mykey.pem")
    host        = aws_instance.myinstance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # amazon-efs-utils provides the "mount -t efs" helper used in Step-10
      "sudo yum install httpd php git amazon-efs-utils -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "my-os"
  }
}

Launched Instance

Step-5 Now we are going to download the GitHub repo to our local system.

resource "null_resource" "image" {
  provisioner "local-exec" {
    command = "git clone https://github.com/AmanGoyal31/multicloud.git images"
  }
}
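One caveat: `git clone` fails if the `images` directory already exists, so re-running `terraform apply` after tainting this resource can error out. A sketch of an idempotent variant (assuming `git` is on your PATH and the shell supports `||`):

```hcl
resource "null_resource" "image" {
  provisioner "local-exec" {
    # Re-use the existing checkout instead of failing on a second run
    command = "git clone https://github.com/AmanGoyal31/multicloud.git images || git -C images pull"
  }
}
```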

Step-6 Now we will create an EFS file system and mount it on our instance.

To create

resource "aws_efs_file_system" "efs1" {
  depends_on     = [aws_security_group.my_security_group, aws_instance.myinstance]
  creation_token = "EFS-file"

  tags = {
    Name = "efs-storage"
  }
}

To attach it to the instance’s subnet

resource "aws_efs_mount_target" "EFS_mount" {
  depends_on      = [aws_efs_file_system.efs1]
  file_system_id  = aws_efs_file_system.efs1.id
  subnet_id       = aws_instance.myinstance.subnet_id
  security_groups = [aws_security_group.my_security_group.id]
}

Step-7 Now we are going to store our public IP in a file so that we can check our website later.

output "myos_ip" {
  value = aws_instance.myinstance.public_ip
}

resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.myinstance.public_ip} > publicip.txt"
  }
}

Step-8 Create an S3 bucket to store the image in and make it public.

resource "aws_s3_bucket" "myamanefsbucket" {
  bucket = "myamanefsbucket"
  acl    = "public-read"

  tags = {
    Name = "myamanefsbucket"
  }
}

locals {
  s3_origin_id = "s3_origin"
}

resource "aws_s3_bucket_object" "object" {
  depends_on = [aws_s3_bucket.myamanefsbucket, null_resource.image]
  bucket     = aws_s3_bucket.myamanefsbucket.bucket
  acl        = "public-read"
  key        = "sample.jpg"
  source     = "C:/Users/user/Pictures/sample.jpg"
}
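A note in case the image downloads instead of displaying: without a `content_type`, S3 stores the object as `binary/octet-stream`, and browsers may refuse to render it inline. A variant of the object resource with the MIME type set (assuming the file really is a JPEG) would look like:

```hcl
resource "aws_s3_bucket_object" "object" {
  depends_on   = [aws_s3_bucket.myamanefsbucket, null_resource.image]
  bucket       = aws_s3_bucket.myamanefsbucket.bucket
  acl          = "public-read"
  key          = "sample.jpg"
  source       = "C:/Users/user/Pictures/sample.jpg"
  content_type = "image/jpeg" # lets browsers render the image inline
}
```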

Step-9 Now create a CloudFront distribution for CDN and low-latency delivery.

resource "aws_cloudfront_distribution" "cf_distribution" {
  origin {
    domain_name = aws_s3_bucket.myamanefsbucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  default_root_object = "sample.jpg"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 10
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  # SSL certificate for the service.
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
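It is also handy to print the distribution’s domain name, so the image URL can be tested directly in the browser (optional, mirroring the instance-IP output above):

```hcl
# Outputs e.g. d111111abcdef8.cloudfront.net; open
# https://<domain>/sample.jpg to verify the CDN works.
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.cf_distribution.domain_name
}
```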

Step-10: We will now mount the EFS volume on the instance, download the Git repo into it, append the CloudFront image URL to the page, and restart the service so we can check it in the browser.

resource "null_resource" "nullremote3" {
  depends_on = [aws_efs_mount_target.EFS_mount, aws_instance.myinstance]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/user/Downloads/mykey.pem")
    host        = aws_instance.myinstance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mount -t efs -o tls '${aws_efs_file_system.efs1.dns_name}':/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/AmanGoyal31/multicloud.git /var/www/html/",
      # append the CloudFront image URL to the landing page
      "sudo bash -c 'echo \"<img src=https://${aws_cloudfront_distribution.cf_distribution.domain_name}/sample.jpg>\" >> /var/www/html/index.html'",
      "sudo systemctl restart httpd",
    ]
  }
}
resource "null_resource" "nulllocal1" {
  depends_on = [null_resource.nullremote3]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.myinstance.public_ip}"
  }
}

Now our Terraform code is finally complete. We will run it from the command line using the following commands:

terraform init
terraform apply --auto-approve

Applying Terraform code

After applying the Terraform code, open the IP in a browser to check the hosted site.

Hosted Site

Then, after completing the task, destroy the complete setup with the following command. It will destroy everything it created.

terraform destroy --auto-approve

While Destroying

And that’s how we completed the task.

Special thanks to Mr. Vimal Daga for enlightening us with all this knowledge.

Also you can check out my GitHub profile.

You can check out my LinkedIn profile here👆 .

That’s all for this article. I hope you enjoyed reading it.
Thanks for giving your precious time to this article✌.
