Launching WordPress using Amazon EC2, EFS, S3, CloudFront, automated with Terraform

Launching WordPress in AWS using EFS rather than EBS for storage.


  1. Create a key and a security group which allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch a volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html.
  5. The developer has uploaded the code into a GitHub repo; the repo also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

NOTE: Here I used EFS for storage in the WordPress instance. In my previous article, “Webserver deployment on AWS using Terraform”, I used EBS. If you read my previous article, you can skip to the EFS part. And if you didn’t, no worries; I explain everything in the task above in detail.


Prerequisites:

  1. AWS account
  2. AWS CLI v2 installed on your host
  3. Terraform installed on your host

First, you need to create a profile using the AWS CLI:

aws configure --profile <name>

Provide your name, or any name of your choice, in place of <name>.


Starting the script

I created four variables in the script. With the help of these variables, anybody can run this script in any region.

variable "region" { default = "ap-south-1" }
variable "profile" { default = "default" }
variable "availability_zone" { default = "ap-south-1a" }
variable "key_name" { default = "kkey" }

By using these variables, we can select the profile, region and availability zone in which we want to deploy our infrastructure. Since I gave each variable a default value, Terraform doesn’t prompt for them on the command line.
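Because these are ordinary Terraform variables, any of them can also be overridden at apply time from the command line if you don’t want the defaults (the values below are just examples):

terraform apply -var="region=us-east-1" -var="availability_zone=us-east-1a"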

Make sure your key_name is unique from your other keys as we are going to create the key here in our code.

provider "aws" {
  profile = var.profile
  region  = var.region
}

Here, our provider is aws. We passed our profile and region using variables.

NOTE: Before running the script, we have to run terraform init to install the required provider plugins. Running it once at the beginning is enough.

Create the key and security group which allows ports 22 and 80

resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = var.key_name
  public_key = tls_private_key.key.public_key_openssh
}

resource "local_file" "key_file" {
  content         = tls_private_key.key.private_key_pem
  filename        = "${var.key_name}.pem"
  file_permission = "0400"
}

We need two resources to create a key pair: tls_private_key and aws_key_pair. Here we used the RSA algorithm for our key. We passed the key_name variable for the key_name parameter in the aws_key_pair resource.

Also, we used the local_file resource to save the key file on our local system.

Using Default VPC and Subnet

We use the default VPC and subnet provided by AWS.

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}

resource "aws_default_subnet" "default_az1" {
  availability_zone = var.availability_zone
  tags = {
    Name = "Default subnet for ap-south-1a"
  }
}

NOTE: When we destroy resources using the terraform destroy command, default resources won’t be destroyed.

Here we used two default resources: aws_default_vpc and aws_default_subnet.

Create a security group which allows ports 22 and 80

In our security group, we allowed two ports for inbound traffic: port 22 for SSH connections and port 80 for HTTP access.

resource "aws_security_group" "sg1" {
  vpc_id = aws_default_vpc.default.id
  name   = "allow_ssh_http"

  lifecycle {
    create_before_destroy = true
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Creating EC2 instance

We can create an EC2 instance using the aws_instance resource.

resource "aws_instance" "web_ec2" {
  ami                    = "ami-0447a12f28fddb066" // Amazon Linux 2 AMI [Free tier eligible]
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.generated_key.key_name
  availability_zone      = var.availability_zone
  subnet_id              = aws_default_subnet.default_az1.id
  vpc_security_group_ids = [aws_security_group.sg1.id]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = self.public_ip // self refers to this instance
    private_key = tls_private_key.key.private_key_pem
    timeout     = "10m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
      "sudo yum install amazon-efs-utils -y",
      "sudo yum install nfs-utils -y",
    ]
  }

  tags = {
    Name = "webserver-ec2-instance"
  }
}

After defining the EC2 instance, we used a connection block so the provisioner can SSH into the instance.

Terraform provides provisioners for running OS-specific commands. There are two types of provisioners:

  1. local-exec
  2. remote-exec

We use the “local-exec” provisioner to run commands on our local host, and “remote-exec” to run commands on the remote host.

In the above code, we used the “remote-exec” provisioner to run commands for installing git and httpd.

After installing, we started the httpd server. To use EFS, the instance needs amazon-efs-utils and nfs-utils installed.
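For contrast, here is a minimal sketch of a local-exec provisioner (the resource name, command and output file are illustrative, not part of this script); it writes the instance’s public IP to a file on the machine running Terraform:

# Hypothetical example: this command runs on the local host, not on the EC2 instance
resource "null_resource" "save_ip" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.web_ec2.public_ip} > instance_ip.txt"
  }
}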

What is Amazon Elastic File System (EFS)?

EBS is block storage, while EFS is file storage. EFS is a fully managed NFS file system. We prefer EFS over EBS for hosting because EFS is centralized in nature: Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances.

For example, if we have two WordPress instances, we need to create two EBS volumes and attach one to each instance. The two EBS volumes are also independent, so there is no synchronization of data, which makes EBS unsuitable for this kind of hosting.

But in the case of EFS, it is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. So all your WordPress instances can connect to one single EFS file system.
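As a sketch of that idea (the second default subnet referenced here is an assumption, not part of this script), an instance in another AZ could reach the same data simply by adding one more mount target for the same file system ID; no second volume is ever created:

# Hypothetical second mount target: same EFS file system, different AZ
resource "aws_efs_mount_target" "mount_efs_az2" {
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = aws_default_subnet.default_az2.id # assumed second default subnet
}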

NOTE: EFS depends on the VPC and security groups. Here I am using the default VPC, which already exists in every region. So, here my EFS only depends on the security group.

Creating EFS

resource "aws_efs_file_system" "efs" {
  depends_on     = [aws_security_group.sg1]
  creation_token = "w_efs"
  tags = {
    Name = "Wordpress-EFS"
  }
}

Attaching EFS to EC2

resource "aws_efs_mount_target" "mount_efs" {
  depends_on     = [aws_efs_file_system.efs, aws_instance.web_ec2]
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = aws_default_subnet.default_az1.id
}

Downloading code into EFS from github

First, we have to log in to the instance, configure the mount using the EFS file system ID, and then pull the code from GitHub.

resource "null_resource" "newlocal" {
  depends_on = [aws_efs_mount_target.mount_efs]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key.private_key_pem
    host        = aws_instance.web_ec2.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod ugo+rw /etc/fstab",
      "sudo echo '${aws_efs_file_system.efs.id}:/ /var/www/html efs tls,_netdev' >> /etc/fstab",
      "sudo mount -a -t efs,nfs4",
      "sudo rm -rf /var/www/html/*",
      # The repo URL was elided in the original article; substitute your own
      "sudo git clone <your-github-repo-url> /var/www/html",
    ]
  }
}

Creating S3 bucket and uploading objects to it

Creating the bucket

resource "aws_s3_bucket" "bucket1" {
  bucket        = "task1-myimage"
  acl           = "public-read"
  force_destroy = true
}

After creating the bucket, to upload the objects, we first clone the repository to our local system and then upload the images to the S3 bucket.

resource "null_resource" "git_download" {
  provisioner "local-exec" {
    # The repo URL was elided in the original article; substitute your own
    command = "git clone <your-github-repo-url> Folder1"
  }
  provisioner "local-exec" {
    when    = destroy
    command = "rmdir Folder1 /s /q" # Windows; use "rm -rf Folder1" on Linux/macOS
  }
}

resource "aws_s3_bucket_object" "image_upload" {
  depends_on   = [aws_s3_bucket.bucket1, null_resource.git_download]
  key          = "image1.png"
  bucket       = aws_s3_bucket.bucket1.bucket
  source       = "Folder1/task1image.png"
  acl          = "public-read"
  content_type = "image/png"
}

Create a CloudFront distribution using the S3 bucket (which contains the images)

Terraform code for CloudFront Distribution

locals {
  s3_origin_id = "S3-task1-myimage"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [aws_s3_bucket.bucket1]

  // Origin settings
  origin {
    domain_name = aws_s3_bucket.bucket1.bucket_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled         = true
  is_ipv6_enabled = true

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

NOTE: Append /image1.png (or any object name) to the end of the CloudFront URL if you want to view it.

Updating index.html with Cloudfront URL

NOTE: A provisioner must be attached to a resource. Here, we need to connect to the EC2 instance and update the file, so we used a provisioner, and we attached it to a null_resource.

resource "null_resource" "update_link" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key.private_key_pem
    host        = aws_instance.web_ec2.public_ip
    port        = 22
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod 777 /var/www/html -R",
      "sudo echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image_upload.key}'>\" >> /var/www/html/index.html",
    ]
  }
}

Checking outputs

In order to get values like the instance ID, volume ID, etc., we can use outputs.

Here are some which I used.

output "vpc_" {
  value = aws_default_vpc.default.id
}
output "subnet_" {
  value = aws_default_subnet.default_az1.id
}
output "publicip_" {
  value = aws_instance.web_ec2.public_ip
}
output "ec2_" {
  value = aws_instance.web_ec2.id
}
output "domainname_" {
  value = aws_s3_bucket.bucket1.bucket_domain_name
}

Final Output

Github link for this project:-

Important practices:-

  1. Run terraform validate after completing the code to check for any errors.
  2. Remember that Terraform builds a dependency graph; resources that don’t reference each other may be created in any order, even in parallel.
  3. So, use depends_on in dependent resources to maintain the order you need.
  4. Provide a unique key_name.
  5. A CloudFront distribution takes a long time to create, so don’t add any connection block to it. Use a null_resource for additional provisioning and make it dependent on the CloudFront distribution.

Thank you Vimal Sir for this amazing task.


