Webserver deployment on AWS using Terraform

Automating the task of launching a web server with Terraform, using the EC2, S3, and CloudFront services of AWS

Terraform + AWS

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Simply put, Terraform is a tool for IaC (Infrastructure as Code). To build and deploy infrastructure using Terraform, you create a Terraform configuration file: a plain text file with the ".tf" extension in which you declare the infrastructure resources you want to build. The .tf format is human readable, supports comments, and is the recommended format for Terraform configuration files.
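For example, a minimal configuration file (the names below are only an illustration, not part of this task's script) could look like this:

provider "aws" {
  region = "ap-south-1"
}

# One resource: an S3 bucket. Bucket names must be globally unique.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345"
}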

Task

1. Create the key and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it at /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisites

  1. An AWS account
  2. AWS CLI v2 installed on your host
  3. Terraform installed on your host

First, you need to create a profile using the AWS CLI:

aws configure --profile <name>

Provide any name of your choice in place of <name>.
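The command then interactively prompts for your credentials and defaults (the values shown here are placeholders):

aws configure --profile myprofile
AWS Access Key ID [None]: AKIAxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-south-1
Default output format [None]: json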


Starting the script

I created four variables in the script. With the help of these variables, anybody can run this script in any region.

variable "profile" {}
variable "region" {}
variable "availability_zone" {}
variable "key_name" {}

By using these variables, we can select our profile, region and availability zone in which we want to deploy our infrastructure.

When you run terraform apply, Terraform prompts you to enter values for these variables.

Also, make sure your key_name is different from your existing key names, as the key pair is created here in the code.
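If you prefer not to type these values at every run, you can keep them in a terraform.tfvars file. The values below are examples for my setup (the key name is arbitrary; just keep it unique):

profile           = "myprofile"
region            = "ap-south-1"
availability_zone = "ap-south-1a"
key_name          = "task1-key"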

provider "aws" {
profile = var.profile
region = var.region
}

Here, our provider is aws. We passed our profile and region using variables.

NOTE: Before running the script, we have to run terraform init to install the required provider plugins. Running it once at the start is enough.
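For this script, that means downloading the plugins for the four providers used below:

# run once in the directory containing the .tf file;
# installs the aws, tls, local, and null provider plugins
terraform init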

Create the key and the security group that allows ports 22 and 80

resource "tls_private_key" "key" {
algorithm = "RSA"
rsa_bits = "4096"
}
resource "aws_key_pair" "generated_key" {
key_name = var.key_name
public_key = tls_private_key.key.public_key_openssh
}
resource "local_file" "key_file" {
content = tls_private_key.key.private_key_pem
filename = "${var.key_name}.pem"
file_permission = 0400
}

We need two resources to create the key pair: tls_private_key and aws_key_pair. Here we used the RSA algorithm for our key and passed the key_name variable to the key_name argument of the aws_key_pair resource.

We also used the local_file resource to save the private key to a .pem file on our local system.
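Once the instance is running, that .pem file also lets you SSH in manually, e.g. (assuming key_name was set to task1-key, and substituting the instance's public IP):

ssh -i task1-key.pem ec2-user@<public-ip>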

Using Default VPC and Subnet

We use the default VPC and subnet provided by AWS.

resource "aws_default_vpc" "default" {
tags = {
Name = "Default VPC"
}
}
resource "aws_default_subnet" "default_az1" {
availability_zone = var.availability_zone
tags = {
Name = "Default subnet for ap-south-1a"
}
}

NOTE: When we destroy resources with the terraform destroy command, these default resources won't be destroyed.

Here we used two default resources: aws_default_vpc and aws_default_subnet.

Create a security group that allows ports 22 and 80

In our security group, we allowed two ports for inbound traffic: port 22 for SSH connections and port 80 for HTTP access.

resource "aws_security_group" "sg1" {
vpc_id = aws_default_vpc.default.id
name = "allow_ssh_http"

lifecycle {
create_before_destroy = true
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

Create an EC2 instance

We can create an EC2 instance using the aws_instance resource.

resource "aws_instance" "web_ec2" {  ami = "ami-0447a12f28fddb066" //Linux 2 AMI[Free tier eligible]
instance_type = "t2.micro"
key_name = aws_key_pair.generated_key.key_name
availability_zone = var.availability_zone
subnet_id = aws_default_subnet.default_az1.id
vpc_security_group_ids = [aws_security_group.sg1.id]
connection {
type = "ssh"
user = "ec2-user"
host = aws_instance.web_ec2.public_ip
private_key = tls_private_key.key.private_key_pem
timeout = "10m"
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd git -y",
"sudo systemctl start httpd",
"sudo systemctl enable httpd",
]
}

//depends_on=[aws_security_group.sg1]
tags = {
name = "webserver-ec2-instance"
}
}

After launching the EC2 instance, we used a connection block so Terraform can SSH into the instance.

Terraform provides provisioners for running OS-level commands. There are two types of provisioners:

  1. local-exec
  2. remote-exec

We use the “local-exec” provisioner to run commands on our local host, and “remote-exec” to run commands on the remote host.

In the above code, we used the “remote-exec” provisioner to install git and httpd, then start and enable the httpd server. The website code itself is cloned from GitHub later, once the EBS volume is mounted at /var/www/html.
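As a small illustration of the other type, here is a hypothetical “local-exec” snippet (not part of the task script) that records the instance's public IP in a local file:

resource "null_resource" "save_ip" {
  # Runs on the machine where terraform apply is executed
  provisioner "local-exec" {
    command = "echo ${aws_instance.web_ec2.public_ip} > instance_ip.txt"
  }
}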

Creating EBS volume

We created the EBS volume using the aws_ebs_volume resource.

resource "aws_ebs_volume" "ebs1" {
size = 1
availability_zone = var.availability_zone
encrypted = true
}

After creating it, we need to attach the volume to the instance. Then we have to format the disk and mount it at /var/www/html, as our code resides at that location.

resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/xvdh"
volume_id = aws_ebs_volume.ebs1.id
instance_id = aws_instance.web_ec2.id
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.key.private_key_pem
host = aws_instance.web_ec2.public_ip
}provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdh",
"sudo mount /dev/xvdh /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/Vishnukvsvk/LW-TASK1.git /var/www/html"
]
}

//To destroy ebs, we need to unmount the ebs from instance.
provisioner "remote-exec" {
when = destroy
inline = [
"sudo umount /var/www/html"
]
}
depends_on = [aws_instance.web_ec2]
}

We used a connection block again to get into the server and then used the “remote-exec” provisioner to format and mount the disk.

NOTE: When running terraform apply, Terraform creates resources in dependency order, which it derives from the references between them, not in the order they appear in the file. For any ordering it cannot infer from references, we need to add depends_on.

Here, the “aws_volume_attachment” resource needs the EC2 instance, so we wrote a depends_on block inside the aws_volume_attachment resource block.

Creating S3 bucket and uploading objects to it

Creating the bucket

resource "aws_s3_bucket" "bucket1" {
bucket = "task1-myimage"
acl = "public-read"
force_destroy = true
}

After creating the bucket, to upload objects, we first clone the repository to our local system and then upload the image to the S3 bucket.

resource "null_resource" "git_download" {
provisioner "local-exec" {
command = "git clone https://github.com/Vishnukvsvk/LW-TASK1.git Folder1"
}
provisioner "local-exec" {
when = destroy
command = "rmdir Folder1 /s /q"
}
}resource "aws_s3_bucket_object" "image_upload" {
key = "image1.png"
bucket = aws_s3_bucket.bucket1.bucket
source = "Folder1/task1image.png"
acl = "public-read"
content_type = "image/png"
depends_on = [aws_s3_bucket.bucket1]
}
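As a quick sanity check, you could also output the object's direct S3 URL (a hypothetical addition; in this task the image is served through CloudFront instead):

output "image_s3_url_" {
  value = "https://${aws_s3_bucket.bucket1.bucket_domain_name}/${aws_s3_bucket_object.image_upload.key}"
}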

Create a CloudFront distribution using the S3 bucket (which contains the images)

Terraform code for the CloudFront distribution:

locals {
  s3_origin_id = "S3-task1-myimage"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  # Origin settings
  origin {
    domain_name = aws_s3_bucket.bucket1.bucket_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled         = true
  is_ipv6_enabled = true

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  depends_on = [aws_volume_attachment.ebs_att]
}

Updating index.html with the CloudFront URL

NOTE: A provisioner must be attached to a resource. Here, we need to connect to the EC2 instance and update the file, which calls for a provisioner, and that provisioner needs a resource to live in. So we used null_resource.

resource "null_resource" "update_link" {
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.key.private_key_pem
host = aws_instance.web_ec2.public_ip
port = 22
}
provisioner "remote-exec" {
inline = [
"sudo chmod 777 /var/www/html -R",
"sudo echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image_upload.key}'>\" >> /var/www/html/index.html",
]
}
depends_on = [aws_cloudfront_distribution.s3_distribution]
}
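Once terraform apply finishes, you can check the result from your host (substituting the public IP from the outputs below). The page should contain the <img> tag pointing at the CloudFront domain:

curl http://<public-ip>/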

Checking outputs

In order to get values like the instance ID, volume ID, etc., we can use outputs.

Here are some which I used.

output "vpc_" {
value = aws_default_vpc.default.id
}
output "subnet_" {
value = aws_default_subnet.default_az1.id
}
output "publicip_" {
value = aws_instance.web_ec2.public_ip
}
output "ebs_" {
value = aws_ebs_volume.ebs1.id
}
output "ec2_" {
value = aws_instance.web_ec2.id
}
output "domainname_" {
value = aws_s3_bucket.bucket1.bucket_domain_name
}
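One more output that would fit here (my addition, not in the original script) is the CloudFront domain name, since that is the URL the images are finally served from:

output "cloudfront_" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}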

Finally, the script is complete.

Outputs:

(Screenshots: the EC2 instance and security group, the S3 bucket, and the CloudFront distribution.)

GitHub link for the complete code: https://github.com/Vishnukvsvk/Terraform-Aws-ServerDeployment

Important practices:

  1. Run terraform validate after completing the code to check for errors (see the commands after this list).
  2. Remember that Terraform creates resources in dependency order derived from the references between them, not in the order they appear in the file.
  3. So, use depends_on in dependent resources to enforce any ordering Terraform cannot infer on its own.
  4. Provide a unique key_name.
  5. The CloudFront distribution takes a long time to create, so don't add a connection block to it. Use a null_resource for any additional provisioning and make it depend on the CloudFront distribution.
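A minimal command sequence for the whole lifecycle (remember that terraform destroy leaves the default VPC and subnet in place):

terraform validate   # check the configuration for errors
terraform apply      # build the infrastructure
terraform destroy    # tear everything down when finished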

Thank you Vimal Sir for this amazing task.

