TERRAFORM INTEGRATION WITH AWS

AWS CLOUD + TERRAFORM + GITHUB: END-TO-END STATIC WEBSITE AUTOMATION USING TERRAFORM WITH AWS

AmanGoyal


Terraform is a tool for managing infrastructure as code. It has been a great success in extending infrastructure knowledge to more team members.
We'll use Terraform integrated with AWS Cloud and GitHub.

This is a fully automated system: in just one click it creates an instance with a volume attached, makes a partition, pulls code from GitHub, and serves it on the web server. A very beautiful system, so run it once and experience the beauty of automation.

PROBLEM STATEMENT: LAUNCH AN APPLICATION USING TERRAFORM

1. Create the key and a security group which allows port 80.
2. Launch an EC2 instance using the key and security group we have created.
3. Launch one EBS volume and mount that volume onto /var/www/html.
4. Copy the GitHub repo code into /var/www/html.
5. Create an S3 bucket, copy the images into it, and change the permission to public readable.
6. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

SOLUTION:-

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.

provider "aws" {
  region  = "ap-south-1"
  profile = "aman"
}

# create the security group allowing SSH and HTTP
resource "aws_security_group" "allow_tls" {
  name        = "launch-wizard-8"
  description = "Allow http and ssh"

  ingress {
    description = "SSH port"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPD port"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "local host"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Custom TCP"
    from_port   = 81
    to_port     = 81
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_tls"
  }
}
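Step 1 of the problem statement also asks us to create the key, while the instance below reuses a key named "mykey" that was created beforehand. If you prefer to keep the key inside Terraform too, a minimal sketch using the hashicorp/tls provider looks like this (the resource name deploy_key is illustrative, not part of the original setup):

```hcl
# Generate an RSA key pair in Terraform instead of reusing a pre-created one.
# The tls_private_key resource name here is illustrative.
resource "tls_private_key" "deploy_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public half with AWS under the name the instance expects.
resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = tls_private_key.deploy_key.public_key_openssh
}
```

With this in place, the private key would come from tls_private_key.deploy_key.private_key_pem rather than a downloaded .pem file.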

LAUNCHING AN EC2 INSTANCE USING THIS KEY AND SECURITY GROUP

Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.

resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykey"
  security_groups = ["launch-wizard-8"]

  # "self" refers to this instance; referencing aws_instance.web here
  # would create a self-reference cycle
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/user/Downloads/mykey.pem")
    host        = self.public_ip
  }

  # install the web server, PHP and git as soon as the instance is up
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "linuxOS"
  }
}

CREATING EBS VOLUME

Amazon Elastic Block Store (EBS) is an easy to use, high performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at any scale.

resource "aws_ebs_volume" "ebs_volume1" {
  # place the volume in the same AZ as the instance so it can be attached
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "linux_volume_1"
  }
}

ATTACH THE VOLUME TO THE O.S.

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs_volume1.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

FORMAT AND MOUNT THE VOLUME ON /var/www/html AND DOWNLOAD THE CODE FROM GITHUB

resource "null_resource" "harddisk" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/user/Downloads/mykey.pem")
    host        = aws_instance.web.public_ip
  }

  # the device attached as /dev/sdh shows up as /dev/xvdh inside the OS
  # on Xen-based instance types
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/square/square.github.io.git /var/www/html/"
    ]
  }
}

CREATE AN S3 BUCKET AND PUT THE STATIC FILES AND FOLDERS IN IT.

S3 is the only object storage service that allows you to block public access to all of your objects at the bucket or the account level with S3 Block Public Access.

resource "aws_s3_bucket" "bucket" {
  depends_on = [
    null_resource.harddisk,
  ]

  bucket        = "my-tf-test-bucket-myweb-2"
  acl           = "public-read"
  force_destroy = true

  tags = {
    Name = "web_bucket"
  }

  # sync the local image folder into the bucket with public-read permission
  provisioner "local-exec" {
    command = "aws s3 sync C:/Users/user/Desktop/new s3://my-tf-test-bucket-myweb-2 --acl public-read --profile aman"
  }
}
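The local-exec above shells out to the AWS CLI to sync a whole folder. For a single image, the upload can also be expressed natively in Terraform; a sketch, where the object key and the local file path are hypothetical examples rather than the original setup:

```hcl
# Upload one image natively instead of shelling out to `aws s3 sync`.
# The key and source path below are illustrative.
resource "aws_s3_bucket_object" "image" {
  bucket       = aws_s3_bucket.bucket.bucket
  key          = "image1.png"
  source       = "C:/Users/user/Desktop/new/image1.png"
  acl          = "public-read"
  content_type = "image/png"
}
```

The native resource has the advantage that Terraform tracks the object in its state and can destroy it along with the bucket.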

CREATE A CLOUDFRONT USING S3 BUCKET

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS, both with physical locations that are directly connected to the AWS global infrastructure and with other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, and Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications.

locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN", "CA", "GB", "DE"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
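Step 6 of the problem statement, using the CloudFront URL to update the code in /var/www/html, is not shown above. A minimal sketch of one way to do it, appending an image tag that points at the distribution (the resource name update_code and the object key image1.png are assumptions, not from the original setup):

```hcl
# Inject the CloudFront URL into the deployed page once the distribution
# exists. The object key "image1.png" is illustrative.
resource "null_resource" "update_code" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/user/Downloads/mykey.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su -c \"echo '<img src=https://${aws_cloudfront_distribution.s3_distribution.domain_name}/image1.png>' >> /var/www/html/index.html\""
    ]
  }
}
```

Because the interpolation references the distribution, Terraform orders this resource after CloudFront is fully created.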

OPEN THE WEBSITE IN THE BROWSER USING THE PUBLIC IP

resource "null_resource" "nulllocal2" {
  depends_on = [
    null_resource.harddisk,
  ]

  # open the deployed site in Chrome on the local (Windows) machine
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web.public_ip}"
  }
}

OUTPUT THE PUBLIC IP OF THE OS

output "myos_ip" {
  value = aws_instance.web.public_ip
}
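Alongside the instance IP, it can be handy to expose the CloudFront domain too, so the CDN URL is printed after every apply. This output is an addition, not part of the original code:

```hcl
# Also expose the CloudFront domain name after `terraform apply`.
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```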

WHEN EVERYTHING GOES RIGHT

OUTPUT (FROM INITIALIZING TERRAFORM TO END-TO-END AUTOMATION) WHEN EVERYTHING GOES WELL.
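For reference, the entire run boils down to the standard Terraform workflow, assuming Terraform is installed and the "aman" AWS CLI profile is configured locally:

```
# download the AWS provider plugins
terraform init

# preview the resources that will be created
terraform plan

# create everything in one go
terraform apply -auto-approve

# tear the whole setup down when finished
terraform destroy -auto-approve
```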

HOSTED WEBSITE:-

So, finally, we come to the conclusion. I am very thankful to Vimal Daga
sir for providing us with unbeatable knowledge. Thank you for helping students discover a passion for things they never even knew they liked.
If you learned something from this article, give me some inspiration on my LinkedIn profile AMAN GOYAL and check out my GitHub.
