
MASTERING INFRASTRUCTURE AS CODE

SUBHABRATA PANDA
ABOUT ME

I am Subhabrata Panda, a passionate DevOps and Cloud enthusiast with a strong foundation in Terraform,
AWS, and cutting-edge cloud technologies. I have consistently demonstrated expertise in deploying scalable,
secure, and efficient infrastructure solutions.

With hands-on experience implementing Infrastructure as Code (IaC) using Terraform, I have successfully
worked on projects that include building secure CI/CD pipelines, managing multi-tier applications, and
optimizing cloud deployments.

This e-book reflects my commitment to making complex infrastructure concepts accessible and actionable.
Whether you are a beginner or an experienced professional, my goal is to provide you with the tools and insights
needed to excel in modern cloud-based DevOps practices.

Feel free to connect with me at [email protected] to discuss ideas, share feedback, or
collaborate on exciting projects!

INDEX

1. PREREQUISITES
2. INTRODUCTION
3. IAC – INFRASTRUCTURE AS CODE
4. INSTALL TERRAFORM ON WINDOWS
5. INSTALL TERRAFORM ON MAC/UBUNTU
6. CHOOSE THE PROVIDER
7. SETUP AWS ACCOUNT
8. AWS DASHBOARD
9. AWS USER SETUP
10. DOWNLOAD AWS CLI ON WINDOWS
11. AWS CLI CONFIGURATION FOR VS CODE
12. AWS EC2 WITH TERRAFORM
    12.1 AMI
    12.2 terraform plan
    12.3 terraform apply
    12.4 terraform destroy
    12.5 terraform destroy -auto-approve
    12.6 terraform validate
13. RESOURCE CHANGE
14. VARIABLES IN TERRAFORM
    14.1 Syntax
15. OUTPUT IN TERRAFORM
16. IMPLEMENT S3 BUCKET WITH THE HELP OF TERRAFORM
    16.1 Introduction of S3 Bucket
17. RANDOM PROVIDER
    17.1 Syntax
18. TERRAFORM REMOTE STATE MANAGEMENT
    18.1 Key points
    18.2 Syntax
19. PROJECT 1 – DEPLOY STATIC WEBSITE ON AWS USING S3 BUCKET
    19.1 Reference
    19.2 Working
20. UNDERSTAND VPC FOR TERRAFORM IMPLEMENTATION
    20.1 VPC CIDR Block
    20.2 Internet Gateway
    20.3 Route Tables
    20.4 Security Groups
    20.5 NACL
    20.6 Subnets
    20.7 NAT Gateway
    20.8 VPC Peering
    20.9 Route 53
21. IMPLEMENTING VPC USING TERRAFORM
    21.1 Introduction
22. DATA SOURCE
    22.1 Real-Life Scenario
23. CREATE EC2 USING EXISTING VPC
24. TERRAFORM VARIABLES
    24.1 Real-Life Scenario
    24.2 Problem without Validation
    24.3 Use of map
    24.4 Use of flatten
    24.5 Use of lookup
    24.6 Environment Variables
    24.7 terraform.tfvars
    24.8 terraform.auto.tfvars
    24.9 Diagram
25. LOCAL VARIABLES IN TERRAFORM
    25.1 Features
26. TERRAFORM: OPERATIONS & EXPRESSIONS
27. TERRAFORM: FUNCTIONS
28. TERRAFORM: MULTIPLE RESOURCES
    28.1 count
    28.2 count.index
    28.3 for_each
29. PROJECT 2 – AWS IAM MANAGEMENT
    29.1 Introduction
30. TERRAFORM MODULES
    30.1 Real-Life Scenario
    30.2 Without the use of modules
    30.3 With the use of modules
    30.4 Implementing VPC using a Terraform Module
    30.5 Implementing EC2 using a Terraform Module
    30.6 Building a Terraform Module
31. PREPARE MODULES TO PUBLISH
32. TERRAFORM DEPENDENCIES
33. TERRAFORM LIFECYCLE
    33.1 create_before_destroy
    33.2 prevent_destroy
    33.3 ignore_changes
    33.4 replace_triggered_by
34. PRE & POST CONDITION RESOURCE VALIDATIONS
    34.1 Syntax for precondition
    34.2 Syntax for postcondition
    34.3 Examples
    34.4 Combined example of precondition and postcondition
35. TERRAFORM STATE MODIFICATIONS
    35.1 terraform state list
    35.2 terraform state show
    35.3 terraform state mv
    35.4 terraform state rm
    35.5 terraform state pull
    35.6 terraform state push
    35.7 terraform state
36. TERRAFORM IMPORT COMMANDS
37. TERRAFORM WORKSPACES
    37.1 Uses
    37.2 Working
    37.3 Diagram
38. TERRAFORM CLOUD WITH GITHUB

PREREQUISITES
• Must have working knowledge of the AWS Cloud.

• Must be comfortable with the VS Code editor.

INTRODUCTION
In this book we will learn everything from the basics to the advanced
features of the TERRAFORM tool, which is mainly used in DevOps, with
the help of the AWS cloud and the VS Code (or Cursor) editor.

Terraform is an open-source Infrastructure as Code
(IaC) tool developed by HashiCorp.

Terraform is an open-source Infrastructure as Code (IaC) tool. Its
configurations are written in HCL (HashiCorp Configuration Language),
and its state management feature (terraform.tfstate) maintains a
detailed record of the current state of managed resources.

3. IAC
IaC tools allow you to manage infrastructure with configuration files
rather than through a graphical user interface.

E.g.
If you want to host an EC2 instance in the AWS cloud, normally you
would:
→ go to the dashboard
→ set up the instance step by step graphically

Rather than doing this graphically, you simply write a configuration
file that contains details like:
→ the instance to host
→ which OS is required
→ what resources the instance requires
Then you simply execute the file, and the IaC tool automatically does
everything you asked for.

4. INSTALL TERRAFORM ON WINDOWS

Go to the Terraform downloads page, scroll down to the Windows
section, and download the Terraform binary.

Extract the files that you have downloaded.

Search for "Edit the system environment variables" in the Windows
search bar.

Click on "Environment Variables".

Click on "Path".

→ Then select the New option
→ Add the location of the extracted Terraform files, then click OK
→ Then OK → OK

Verify the installation in the Windows cmd terminal with
"terraform -version".
5. INSTALL TERRAFORM ON MAC/UBUNTU
MAC
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

Or download the Terraform binary.

LINUX
Ubuntu/Debian
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install terraform

CentOS/RHEL
sudo yum install -y yum-utils

sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

sudo yum -y install terraform

Amazon Linux
sudo yum install -y yum-utils shadow-utils

sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

sudo yum -y install terraform
6. CHOOSE THE PROVIDER

After Terraform is installed, we need to choose which provider we will
be working with. To see which providers are available, go to the
Terraform Registry (https://registry.terraform.io) and search for the
provider.
7. SETUP AWS ACCOUNT
Go to the AWS sign-up page.

Fill in all the details the page asks for; all the details must be genuine.
At the end it will cost 1 USD for verification, which is refunded
automatically.

STEP-1 STEP-2

STEP-3 STEP-4

STEP-5 STEP-6
Select the basic support plan. If this page shows, then you are
good to go.

STEP-7
You will also get a mail confirming the account.
8. AWS DASHBOARD

The AWS Management Console Dashboard is a user-friendly, web-based
interface for managing AWS services and resources. Key features include:

- Customizable Interface: Pin frequently used services.
- Resource Monitoring: View EC2, S3, and other resources at a glance.
- Billing Management: Track usage, view bills, and set cost alerts.
- Integrated Tools: Access CloudWatch, CloudFormation, and IAM.
- Global Management: Manage resources across regions and zones.

It provides real-time insights, centralized management, and tools for
monitoring, security, and cost optimization, simplifying cloud
operations for all users.
9. AWS USER SETUP

Now we need to set up a user, because we need user-level access when
performing actions on AWS from the local machine.

So we will be using the service known as IAM (Identity and Access
Management).

IAM → AWS Identity and Access Management (IAM) is a critical
service that enables you to securely control access to AWS resources.
It provides fine-grained access management for AWS services and
resources, ensuring that only authorized users and applications can
interact with your cloud infrastructure.

IAM is a foundational security service used across all AWS
environments, including services like Amazon EKS, EC2, S3, RDS, and
others.

Go to the search bar of AWS and type IAM; you will see a dashboard
something like this.

Now click on "Users" → give the user name → and click on "Next".

Tick "Provide user access to the AWS Management Console"
→ Then tick "I want to create an IAM user"
→ Now tick and give the "Custom password"
→ For now, untick "User must create a new password at next sign-in"
→ And click on "Next"

Under "Set permissions"

→ Select "Add user to group".
→ Click on "Create group".

Next this page will appear

→ For now, tick "AdministratorAccess"


→Click “Create user group”

→ Then click on "Next"

→ If you want to give a tag, then give one, or click on "Create user".

If you see this then you are good to go

→ Remember or copy the "User name", "Sign-in URL", and "Console
password".
→ Then click on "Return to user list".
→ Now sign out from the root account.

Again, click on "Sign in to the Console".

STEP-1: The sign-in screen will appear.
STEP-2: → Enter the 12-digit account ID.

→ Enter the user details and sign in.

→ You will be signed in as an IAM user, not the root user.
10. DOWNLOADING AWS CLI ON WINDOWS

Search for "aws cli download" in any browser.

Click on the given link.

Click on 64-bit and the installer will download automatically.

After the download, right-click on the downloaded file and run it.

Just do Next → Next → Accept the License Agreement → Next → Install.

Type "aws" in the Windows cmd.

If you see the usage output, you have successfully installed the AWS CLI.
Now we can access the AWS cloud platform from the local environment.
11. AWS CLI CONFIGURATION FOR VS CODE

All the configuration we make will be in VS Code.

→ Create an empty folder named "Terraform".
→ Right-click inside the Terraform folder to open the integrated
terminal in VS Code.

Suppose you want to check something from the VS Code terminal, e.g.
which users have been created; we type:

→ aws iam list-users

By default, it will show

But in my case, it shows an error.

NOTE: It's not really an error; it's because I deleted my user manually
from the AWS IAM service.
Other reasons:
→ Invalid or expired credentials
→ Incorrect AWS profile
→ Session token expiration (if using MFA)

Either way, you need to run "aws configure".

HOW TO GET ALL THE INFORMATION

1. Go to the AWS cloud, search for the IAM service, and click on tf-user.

2.Click on “Create access key”

Tick the use case, tick the confirmation → click Next → give the tag
name → create the access key.
→ Copy the access key & secret access key.

Now go to the VS Code terminal and type "aws configure".

Fill in only the access key & secret access key and skip the rest by
pressing Enter.

Finally, your AWS cloud account and local machine are connected to
each other through VS Code.
12. AWS EC2 WITH TERRAFORM

For starters you need to install the "HashiCorp Terraform" extension
in VS Code.

Now create a folder named "AWS"; inside it create an ec2-instance
folder, then create a main.tf file.

Now decide which provider you will be working with; for me it's AWS.

Copy this link:

https://registry.terraform.io/providers/hashicorp/aws/latest
You will see

For reference purposes, go to "Documentation", which is present next
to "Use Provider".

Now copy all the code present under "How to use the provider" into
"main.tf":

terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "5.75.1"
}
}
}

provider "aws" {
# Configuration options
}

You can use any local name in place of "aws" (e.g., "zxd"), but for
readability we use "aws"; the only thing that is important is

source = "hashicorp/aws"
version = "5.75.1"

Under "# Configuration options" you will now write the configuration.

First, you need to specify the region where you will be working.

region = "us-east-1"

provider "aws" {
# Configuration options
region = "us-east-1"
}

Second, as per the requirement we need to create an EC2 instance, and
an EC2 instance is a resource:

resource "aws_instance" "name" {


}

Give the name as per your desire; for me it's "myec2" in place of "name":

resource "aws_instance" "myec2" {

}

Now you need to add all the necessary requirements to start the
instance, like "ami" and "instance_type".
AMI → Amazon Machine Image (AMI) is a pre-configured template
that contains the operating system, application server, and applications
needed to launch an instance in Amazon Elastic Compute Cloud (EC2).

Steps to get the AMI ID for your desired operating system:

Go to the AWS dashboard and select EC2, or search for the EC2 service
in the search bar just above "Console Home".

You will see this kind of interface; select "Launch instance".
You will see this

Select the desired operating system and copy the AMI ID shown there
to add it in Terraform.

Instance Type → The hardware configuration of an Amazon EC2
instance, determining its compute, memory, storage, and networking
capacity.
Instance types are categorized into families based on their use cases,
such as general-purpose, compute-optimized, memory-optimized, and
storage-optimized workloads.

Steps to check which instance type you need for your project:

To find that, just look below "Application and OS Images (Amazon
Machine Image)" and you will notice "Instance type".

Now that you have learned about both the AMI ID and the instance
type, implement them in the Terraform resource.

"tags" refers to giving a name to the EC2 instance:


resource "aws_instance" "myec2" {
ami = "ami-0453ec754f44f9a4a"
instance_type = "t2.micro"

tags = {
Name = "Myec2"
}
}

Save the configuration.

Now in the VS Code terminal type the following.

IMPORTANT

terraform init → Prepares your working directory for use with
Terraform by downloading the necessary plugins and setting up the
environment.

terraform init → It must be run where the .tf file is present.
Before applying terraform init After applying terraform init

terraform plan → Generates and shows an execution plan, detailing
the actions Terraform will take to achieve the desired state. The
execution plan is shown in the terminal.

terraform apply → Executes the actions from the plan to create,
update, or delete resources in your infrastructure. With this command
Terraform will ask "Do you want to perform these actions?"; you must
answer the question with yes or no only.

If you answered yes, it will create the instance as configured; to check,
go to the AWS console and verify.

terraform destroy → Deletes all the infrastructure resources defined
in your Terraform configuration. With this command Terraform will ask
"Do you want to perform these actions?"; you must answer with yes or
no only.

terraform destroy -auto-approve → Deletes all the infrastructure
resources defined in your Terraform configuration without asking for
final permission to delete them.

terraform validate → Checks the syntax and validity of your
Terraform configuration files without interacting with remote resources.
13. RESOURCE CHANGE

If you want to make changes to an AWS resource, such as updating the
instance type of an EC2 instance (e.g., from t2.micro to t3.micro), you
need to modify the configuration file first.

Run terraform plan: This will show you the changes Terraform intends
to make, including the update to the EC2 instance type.

Run terraform apply: This will apply the changes and update the EC2
instance to the new instance type.

After running terraform apply, you can check the AWS EC2 console
to verify that the instance type has been updated from t2.micro to
t3.micro.
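As a minimal sketch (reusing the EC2 resource from the previous chapter), the only edit needed in main.tf is the instance_type value:

resource "aws_instance" "myec2" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t3.micro" # changed from "t2.micro"

  tags = {
    Name = "Myec2"
  }
}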

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
14. VARIABLES IN TERRAFORM

In Terraform, variables let you reuse and customize your
configurations by allowing you to pass values into them when running
Terraform.

Syntax:
variable "variable_name" {
type = string # Type: string, number, bool, list, map, etc.
default = "value" # Optional default value
description = "Description of the variable"
}
"variable_name" can be your any desire name

It's best practice to create another file named "variables.tf" to store
all the variables in one place,
e.g.

main.tf variables.tf
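Since the screenshots do not reproduce here, a minimal sketch of the split (reusing the EC2 example from earlier; the values are illustrative):

# variables.tf
variable "aws_instance_type" {
  type        = string
  default     = "t2.micro"
  description = "EC2 instance type"
}

# main.tf
resource "aws_instance" "myec2" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = var.aws_instance_type
}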

15. OUTPUT IN TERRAFORM

In Terraform, output variables are used to display information about
your resources or configuration after running terraform apply. They
help you extract and use values, such as resource IDs or IP addresses,
for other tasks.

It's best practice to create another file named "outputs.tf" to store
all the outputs in one place, e.g.

Outputs.tf
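A minimal sketch of what outputs.tf might contain (assuming the myec2 instance; id and public_ip are real attributes of aws_instance):

output "instance_id" {
  value       = aws_instance.myec2.id
  description = "ID of the EC2 instance"
}

output "public_ip" {
  value       = aws_instance.myec2.public_ip
  description = "Public IP of the EC2 instance"
}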

You will see something like this

16. IMPLEMENT S3 BUCKET WITH THE HELP OF
TERRAFORM

An S3 bucket is a logical container used to store and organize objects
(files) in Amazon Simple Storage Service (Amazon S3). It acts as a
root directory where data is stored securely and can be accessed,
managed, and retrieved.

Key features:
• Stores virtually unlimited amounts of data.
• Stores data as objects that consist of a key, a value, and metadata.
• Bucket names must be unique across all AWS accounts globally.
• Store and retrieve backup data.
• Region-specific to reduce latency.
• Versioning, object locking, and MFA delete enhance security.

First create a folder inside AWS named "s3-bucket"; inside it create
blank files named
• main.tf
• variables.tf
• outputs.tf
• mydata.txt → to upload data to the S3 bucket

variables.tf

main.tf

outputs.tf
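The screenshots do not carry over, so here is a hedged sketch of what main.tf might look like (the bucket name is illustrative; aws_s3_object is the current resource for uploading a file):

resource "aws_s3_bucket" "S3_bucket" {
  bucket = "my-demo-bucket-12345" # must be globally unique
}

resource "aws_s3_object" "mydata" {
  bucket = aws_s3_bucket.S3_bucket.id
  key    = "mydata.txt"
  source = "mydata.txt" # local file to upload
}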

Now type the "terraform init" command in the VS Code terminal to
initialize Terraform in the s3-bucket folder.

Type the "terraform plan" command to check what configuration
changes Terraform will make, then type "terraform apply" to create
the resources.

If you see the expected output, then you are good to go.

To verify whether it was created or not, go to the S3 service in the
AWS console.

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
17. RANDOM PROVIDER

The Random provider in Terraform is used to generate random
values, such as strings, numbers, or pet names, which can be used in
configurations. This is helpful when you need unique identifiers for
resources.

In this demo we will use the s3-bucket folder to learn the Random
provider.

For reference
https://registry.terraform.io/providers/hashicorp/random/latest/docs

The provider syntax is:
terraform {
required_providers {
random = {
source = "hashicorp/random"
version = "3.6.3"
}
}
}

Changes in main.tf

Without the random provider the bucket name is hardcoded; with the
random provider a random suffix is appended.

Here you will notice a resource with the name "rand_id":

resource "random_id" "rand_id" {
  byte_length = 10
}

In place of "random_id" it can be any of the provider's other resources
(e.g., random_string, random_pet, random_integer).

"byte_length" can be any number of your choice.

Here the "${…}" syntax is used because we are referencing a value from
somewhere else.

In place of "dec" we can use any of the other output attributes of
random_id: b64_url, b64_std, hex, or dec.
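A minimal sketch of how the random ID might be wired into the bucket name (the prefix is illustrative):

resource "random_id" "rand_id" {
  byte_length = 10
}

resource "aws_s3_bucket" "S3_bucket" {
  bucket = "demo-bucket-${random_id.rand_id.dec}"
}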

There will be no change in the "variables.tf" and "outputs.tf"
configurations.

terraform init → terraform plan → terraform apply

Then you are good to go.

To verify whether it was created or not, go to the S3 service in the
AWS console.

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
18. TERRAFORM REMOTE STATE MANAGEMENT

As we know, Terraform maintains the current state of the infrastructure
in the file "terraform.tfstate".

Key points:
→ Purpose: It helps Terraform determine what changes need to be
applied by comparing the actual state of resources with the desired
state in your configuration.
→ Format: The file is in JSON format and includes resource metadata,
attributes, and dependencies.
→ Location: By default, it is saved in the working directory where
Terraform is run.

Remote state: To collaborate securely, the state file can be stored
remotely (e.g., in S3 or a Terraform Cloud workspace).

In this section we will store "terraform.tfstate" in the S3 bucket we
created, for collaboration, security, and consistency in infrastructure
management when working with Terraform.

Syntax
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "state/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-lock-table" # For locking
encrypt = true # Encrypt state file
}
}

For this we will create a new folder named "terraform-backend" where
we will create "main.tf".

The main benefit of using an S3 backend is that whenever the state
changes, Terraform automatically updates "terraform.tfstate" in the
S3 bucket.

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
19. PROJECT: DEPLOY STATIC WEBSITE ON AWS
USING S3

To make this project successful we need a simple website with

- index.html
- style.css
- script.js

and Terraform files like

- provider.tf
- main.tf
- variables.tf
- outputs.tf

For reference:
- aws_s3_bucket_public_access_block – Terraform
- Setting permissions for website access – AWS
- aws_s3_bucket_policy – Terraform
- aws_s3_bucket_versioning – Terraform
- aws_s3_bucket_website_configuration – Terraform

To host the static website in an S3 bucket you must
untick "Block all public access":
resource "aws_s3_bucket_public_access_block" "example" {
bucket = aws_s3_bucket.S3_bucket.id

block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}

To “add a bucket policy” →To make the objects in your bucket publicly
readable, you must write a bucket policy that grants everyone
s3:GetObject permission.
resource "aws_s3_bucket_policy"
"allow_access_from_another_account" {
bucket = aws_s3_bucket.S3_bucket.id
policy = jsonencode(
{
Version = "2012-10-17",
Statement = [
{
Sid = "PublicReadGetObject",
Effect = "Allow",
Principal = "*",
Action = "s3:GetObject",
Resource = "arn:aws:s3:::${aws_s3_bucket.S3_bucket.id}/*"
}
]
}
)
}

The jsonencode function in Terraform is used to convert a data
structure written in HCL (HashiCorp Configuration Language) into a
JSON-formatted string.

"Enable" Bucket Versioning:


resource "aws_s3_bucket_versioning" "versioning" {
bucket = aws_s3_bucket.S3_bucket.id
versioning_configuration {
status = "Enabled"
}
}

The aws_s3_bucket_website_configuration resource in Terraform
is used to configure static website hosting for an S3 bucket. This
resource specifies the settings like the index document, error
document, and routing rules for the website.

rresource "aws_s3_bucket_website_configuration" "site_hosting" {


bucket = aws_s3_bucket.S3_bucket.id

index_document {
suffix = "index.html"
}
}
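To round this off, a hedged sketch of an output exposing the website URL (website_endpoint is a real attribute of this resource):

output "website_endpoint" {
  value       = aws_s3_bucket_website_configuration.site_hosting.website_endpoint
  description = "S3 static website endpoint"
}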

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
20. UNDERSTANDING VPC FOR TERRAFORM
IMPLEMENTATION

A Virtual Private Cloud (VPC) is a logically isolated network in AWS,
where you can launch resources securely.

VPC CIDR BLOCK

When you create a VPC, you specify a CIDR block that defines the IP
address range for the entire VPC.

CIDR (Classless Inter-Domain Routing) is a method for allocating IP
addresses and routing Internet Protocol (IP) packets.

CIDR block allocation:

You specify a range of IP addresses (a CIDR block) within the VPC's IP
address range for the subnet.

This determines the pool of IP addresses available for instances in the
subnet.

For e.g. 10.0.0.0/16

This block allows for 65,536 IP addresses (but 65,531 usable addresses,
since AWS reserves 5 addresses per subnet).

KEYWORDS
NAT gateway, Internet Gateway, subnets (private or public), Load
Balancer, NACL, Security group, Route table, VPC Peering

NAT – Network Address Translation
NACL – Network Access Control List
ICMP – Internet Control Message Protocol

INTERNET GATEWAY
A gateway that allows you to connect the VPC to the Internet; it only
applies to instances in a public subnet.

LOAD BALANCER
Forwards requests depending upon the load. Basically, it is connected
to the public subnet.
OR
Distributes incoming traffic across multiple targets (e.g., EC2 instances)
to improve availability and reliability.

ROUTE TABLE
A path that connects the load balancer of the public subnet to the
application or instance of the private subnet.
A set of rules that determine where network traffic is directed.
Each subnet in a VPC has its own route table that controls traffic flow
between subnets.

SECURITY GROUPS
The first layer of security of any EC2 instance, i.e. attached to the
instance, which tells the instance which IP addresses or ports will be
allowed to access the instance.

Types of security group rules:

Inbound traffic – Internet to instance
Outbound traffic – Instance to internet

NACL – NETWORK ACCESS CONTROL LIST

The second layer of security that comes after security groups; NACLs
are attached to subnets.

Difference between the two:

Security group                    NACL
Operates at the instance level    Operates at the subnet level
Permit rules only                 Permit and deny rules
All rules examined first          Rules processed until matched
SUBNETS
The VPC is created with an IP address range; subnets split that IP
address range for sub-projects.

Inside the subnets the EC2 instances are created.

There are two types of subnets:

Private subnet → Instances cannot connect to the internet through the
internet gateway; instead they use a NAT gateway.

Public subnet → Instances inside this subnet are accessible to the
internet through the internet gateway via the route table.

NAT GATEWAY
- Used for private subnets.
- Helps to mask IP addresses.
- Helps to download resources from the internet; while doing that
it masks or replaces the private IP address with a public IP address,
either from the load balancer (SNAT) or from the router (NAT
gateway).

VPC PEERING
- A stable connection between 2 VPCs, either in the same account or in
two different accounts.

- To check if the peering is working or not, use ping (ICMP).

Steps for route table updates:

Create a VPC peering connection (if not already done) between the
two VPCs, as explained earlier.

Update public subnet route tables: Go to the Route Tables
section in the AWS VPC console.
- For each public subnet in VPC 1, add a route:
  o Destination: the CIDR block of VPC 2 (e.g., 10.2.0.0/16).
  o Target: the VPC peering connection ID.

- Repeat the same for the public subnets in VPC 2, adding routes to
the CIDR block of VPC 1.

Security group updates:

- Modify the security groups of instances in both VPCs to allow
traffic from the other VPC's CIDR block (e.g., 10.1.0.0/16 and
10.2.0.0/16).
- Add rules for the required protocols (e.g., SSH, HTTP, or custom).

Test connectivity:
- Launch instances in the respective subnets (public/private) of both
VPCs and verify connectivity using tools like ping or curl.

Route 53
- Provides DNS as a service.
- DNS – Domain Name System
- Performs health checks on web servers.
- Domain registration → hosted zones
21. IMPLEMENTING VPC USING TERRAFORM

We will be implementing this diagram in Terraform.

We will be using services like

• VPC
• Availability zone
• Subnets
• Internet gateway
• Network ACL
• Network ACL association
• Route table
• Route table association
• NAT gateway
• Elastic IP NAT gateway

For this implementation we will create a new folder named "vpc" that
contains:
• main.tf
• outputs.tf
• provider.tf
provider.tf

main.tf
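The main.tf itself appears only as screenshots in the original; a hedged, trimmed-down sketch of the kind of configuration this chapter builds (names and CIDRs are illustrative):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "tf-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}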

Outputs.tf

Output

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
22. DATA SOURCE

A data source allows Terraform to fetch and use information from
• external sources, or
• existing resources within your cloud infrastructure.
Data sources are read-only and useful for referencing or integrating
with existing infrastructure.

They are useful for obtaining dynamic data that you need for your
configurations.

Real-Life Scenario:
Your company already has a production VPC and subnets set up, and
you want to deploy a new application into an existing subnet without
modifying the existing infrastructure. Instead of hardcoding VPC and
subnet IDs, you use Terraform data sources to dynamically fetch this
information.

NOTE: we are only doing “terraform plan” not “terraform apply” because it
may cost you, do it at your own risk.

Create a new folder in "AWS", i.e. "tf-data-resource"; inside it create
another file named "main.tf".
After applying “terraform plan” in the terminal, you will see this error

It explains:
• The AMI query is too broad, leading to multiple AMIs matching
the criteria.
• Terraform doesn't know which AMI to select because no specific
filtering or sorting is provided.

To fix the error we just need to make a change in the data source, as
sketched below.
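A hedged sketch of the corrected data source (the owner and filter values are illustrative; most_recent resolves the ambiguity):

data "aws_ami" "latest_amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}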

After applying "terraform plan" in the terminal,

If you want to check whether it's correct, verify it in the AWS console:
go to the AWS console → type AMI in the search bar → click on "AMI
Catalog".

This page will appear and type “ami-0b5268083787b7af7”

You will see

To see the availability zones of a particular region:

To get the account details
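Hedged sketches of the two data sources these screenshots show (both are standard AWS provider data sources):

data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_caller_identity" "current" {}

output "azs" {
  value = data.aws_availability_zones.available.names
}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}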

23. CREATE EC2 USING EXISTING VPC

To create an EC2 instance with an existing
• VPC
• private subnet
• security group

main.tf
STEP-1 STEP-2

STEP-3 STEP-4
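The steps above are screenshots in the original; a hedged sketch of the overall idea (all names and tag values are illustrative assumptions):

data "aws_vpc" "existing" {
  tags = { Name = "prod-vpc" }
}

data "aws_subnet" "private" {
  vpc_id = data.aws_vpc.existing.id
  tags   = { Name = "prod-private-subnet" }
}

data "aws_security_group" "app" {
  vpc_id = data.aws_vpc.existing.id
  tags   = { Name = "app-sg" }
}

resource "aws_instance" "app" {
  ami                    = "ami-0453ec754f44f9a4a"
  instance_type          = "t2.micro"
  subnet_id              = data.aws_subnet.private.id
  vpc_security_group_ids = [data.aws_security_group.app.id]
}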

Output

24. TERRAFORM VARIABLES

In this section we will create a new folder named "tf-variables" with
the files "main.tf" and "variables.tf".

main.tf

Real-World Scenario: Multi-Environment Deployment

Suppose you are tasked with deploying EC2 instances in multiple
environments like development, staging, and production. Each
environment requires:

Different instance types (t2.micro for dev, t2.medium for staging,
m5.large for production).
Unique tags to identify the environment ("Name = dev-instance",
"Name = staging-instance", etc.).
Different AMI IDs based on the environment.
If you hardcode these values in your main.tf file for each environment,
you will need to duplicate and edit the file multiple times, leading to
redundancy and error-prone configurations.

main.tf variables.tf

In this variable we are giving ami a default value, which is
"ami-0453ec754f44f9a4a".

For this variable we need to give the information in the terminal, which
looks like this:
Subhabrata Panda | 65
But if you try to give a wrong answer, something like t2.mic, which is
not a valid EC2 instance type, the terminal will accept the input, but
when you run "terraform apply" it will show an error.

Problem without validation:

If a team member accidentally sets an unsupported instance type like
m5.large, Terraform will proceed and deploy the instance. This can
result in:
• Increased costs: deploying an expensive instance unnecessarily.
• Deployment failures: if the specified instance type is not
supported in the target AWS region.

If you use validation in this case, your only option will be to choose
either "t2.nano" or "t2.micro"; if you type anything except these two it
will show an error, as sketched below.
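A hedged sketch of such a validation block (the allowed values follow the book's example):

variable "aws_instance_type" {
  type        = string
  description = "EC2 instance type"

  validation {
    condition     = contains(["t2.nano", "t2.micro"], var.aws_instance_type)
    error_message = "Only t2.nano or t2.micro are allowed."
  }
}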

If a block has multiple variables in main.tf, then instead of writing
each variable out separately in variables.tf, we can write:

variables.tf main.tf
Use of map in Terraform:
In Terraform, a map is a data structure that allows you to define and
access related data efficiently. It's particularly useful for organizing and
managing configurations when you need to group related values.

variables.tf main.tf

In this example, alongside the "ec2-instance" tag there can be any
number of tags, as we are using a map.
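A hedged sketch of the map-based tags pattern (tag keys and values are illustrative):

variable "ec2_tags" {
  type = map(string)
  default = {
    Name        = "ec2-instance"
    Environment = "dev"
    Team        = "platform"
  }
}

resource "aws_instance" "myec2" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"
  tags          = var.ec2_tags
}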

Use of flatten in Terraform:

flatten combines nested lists into a single flat list.
It removes any nested layers.

Use case: simplify a list of lists, as sketched below.
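A minimal sketch:

locals {
  nested = [["10.0.1.0/24", "10.0.2.0/24"], ["10.0.3.0/24"]]
  flat   = flatten(local.nested)
  # flat = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}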

Use of lookup in Terraform:
lookup retrieves a value from a map based on a key.
If the key is not found, you can provide a default value.

Use case: access values from a map safely, as sketched below.
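A minimal sketch:

locals {
  instance_types = {
    dev     = "t2.micro"
    staging = "t2.medium"
  }
  # Falls back to "t2.micro" because "prod" is not a key in the map.
  prod_type = lookup(local.instance_types, "prod", "t2.micro")
}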

ENVIRONMENT VARIABLES

Whenever you run "terraform apply" it always asks for the variable
input. Instead of typing the input every time, we can set an environment
variable by writing the given command in the terminal.

Syntax:
export TF_VAR_key=value

Example:
export TF_VAR_aws_instance_type=t3.micro

If we again run either "terraform plan" or "terraform apply", it will
execute directly without asking for the input.

We can also change t3.micro to t3.nano by rewriting the command:

export TF_VAR_aws_instance_type=t3.nano

terraform.tfvars

The terraform.tfvars file is used to assign values to the variables
declared in variables.tf.

It helps separate configuration from implementation, making the code
reusable and modular.

variables.tf                          terraform.tfvars
Defines the variables Terraform       Provides values for the defined
expects.                              variables.
Declarative (defines the variable     Assigns actual values to variables.
schema).
Declares variable "region" {}.        Assigns region = "us-west-2".

We have to create a new file named "terraform.tfvars".

variables.tf terraform.tfvars
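A hedged sketch of the pair (values are illustrative):

# variables.tf
variable "aws_instance_type" {
  type = string
}

variable "aws_region" {
  type = string
}

# terraform.tfvars
aws_instance_type = "t2.micro"
aws_region        = "us-east-1"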

terraform.auto.tfvars

This is a special file in Terraform used to automatically assign values
to variables.
Terraform automatically loads this file if it exists in the working
directory.
It works similarly to terraform.tfvars, but with a key difference:
Terraform does not require you to explicitly specify this file during
execution.

terraform.tfvars                      terraform.auto.tfvars
Output before having                  Output after having
"terraform.auto.tfvars", after        "terraform.auto.tfvars", after
applying "terraform plan"             applying "terraform plan"

This means "terraform.auto.tfvars" takes precedence over
"terraform.tfvars": when we run "terraform plan" in the terminal,
values from "terraform.auto.tfvars" override those from
"terraform.tfvars".

We can also pass a value directly in the terminal with the -var flag,
e.g. terraform plan -var="aws_instance_type=t3.micro".
25. LOCAL VARIABLES IN TERRAFORM

Local variables are used to simplify and organize complex
configurations by defining intermediate values within a module. They
allow you to create reusable and readable logic without polluting your
input variables or hardcoding values.

Key features of local variables

Defined with a locals block:

• Local variables are declared within a locals block.

Accessed using local.<name>:

• You can reference local variables using the local namespace.

Evaluated dynamically:
• Local variables can be used to compute values dynamically, often
combining other inputs, resources, or expressions.

Scoped to the module:

• They are only available within the module where they are defined,
making them ideal for encapsulating logic.

A combined sketch of all of this is shown below.
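A hedged sketch (the names are illustrative):

locals {
  environment = "dev"
  # Computed dynamically from another local value.
  name_prefix = "${local.environment}-app"
}

resource "aws_instance" "app" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"

  tags = {
    Name = "${local.name_prefix}-instance"
  }
}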

26. TERRAFORM: OPERATIONS & EXPRESSIONS

To explain operations and expressions we will create another folder,
"tf-operators-expressions"; inside it we will create another file,
"main.tf".
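The examples themselves are screenshots in the original; a hedged sketch of the kinds of expressions this chapter demonstrates:

locals {
  a = 10
  b = 3

  sum        = local.a + local.b              # arithmetic: 13
  is_greater = local.a > local.b              # comparison: true
  both       = local.a > 0 && local.b > 0     # logical: true
  # conditional (ternary) expression
  size = local.a > 5 ? "t2.medium" : "t2.micro"
}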

27. TERRAFORM: FUNCTIONS

To explain functions we will follow the same pattern: create another
folder, and inside it create another file, "main.tf".
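The function examples are screenshots in the original; a hedged sketch of some commonly used built-in functions:

locals {
  name    = "terraform"
  subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  base    = { Name = "demo" }

  upper_name = upper(local.name)             # "TERRAFORM"
  count      = length(local.subnets)         # 2
  joined     = join(",", local.subnets)      # "10.0.1.0/24,10.0.2.0/24"
  merged     = merge(local.base, { Env = "dev" })
  cidr       = cidrsubnet("10.0.0.0/16", 8, 1)  # "10.0.1.0/24"
}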

28. TERRAFORM: MULTIPLE RESOURCES

In this chapter we will learn about the use of "count".

The count parameter in Terraform allows you to create multiple
instances of a resource or module by specifying the desired number of
instances dynamically.
It provides a simple way to scale resources programmatically and
reduces code duplication.

For the first demo we will be creating a VPC with two subnets using
count in Terraform.

In order to create this demo we will create a new folder named
"tf-multiple-resources" in the AWS folder; inside it we will create
"main.tf", which has the subdivisions below.

main.tf

count
• The count parameter is set to 2, which means Terraform will
create two instances of the aws_subnet resource.
• Each instance will have unique properties based on the use of
count.index.

count.index
• count.index is a zero-based index representing the
instance number of the resource being created.
• Since count = 2, the value of count.index will be:
o 0 for the first subnet instance.
o 1 for the second subnet instance.

A hedged sketch of the resources this describes is shown below.
Now type "terraform init" → "terraform plan" → "terraform apply".

Now go to the AWS console to see the result.

For the second demo we will be creating a VPC with two subnets and
four instances using count in Terraform.
Continuing from the demo 1 infrastructure above:

Key components:
• aws_subnet.tf_vpc_subnet[*].id:
o Retrieves the list of all subnet IDs created by the
aws_subnet.tf_vpc_subnet resource.

• length(aws_subnet.tf_vpc_subnet):
o Returns the total number of subnets (e.g., 2 subnets in the
earlier example).

• count.index % length(aws_subnet.tf_vpc_subnet):
o Distributes the instances evenly across the available subnets
using the modulo operation.
o This ensures that the subnet index cycles through the
available subnets (e.g., 0, 1, 0, 1 for 2 subnets).

• element(...):
o Retrieves the subnet ID at the calculated index.

A hedged sketch of the instance resource this describes is shown below.
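A minimal sketch (the AMI is the one used earlier in the book; the rest is illustrative):

resource "aws_instance" "tf_instance" {
  count         = 4
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"
  # Cycle instances across the two subnets: 0, 1, 0, 1.
  subnet_id = element(
    aws_subnet.tf_vpc_subnet[*].id,
    count.index % length(aws_subnet.tf_vpc_subnet)
  )
}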

Now type "terraform plan" → "terraform apply".

Now go to the AWS console to see the result.
For the third demo we will drive the same setup with variables.

First, create variables.tf.

Second, create terraform.tfvars.

Third, continue from the demo 2 infrastructure above, with a slight
change in the resource "aws_instance".
Now type "terraform plan" → "terraform apply".

Now go to the AWS console to see the result.
Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.

Use of "for_each"
"for_each" → only accepts sets and maps.
In order to learn the use of "for_each" we will make slight changes
in "variables.tf", "terraform.tfvars" and "main.tf", especially in the
resource "aws_instance"; the rest remains the same.

We will make the changes in the demo 3 project.

variables.tf

terraform.tfvars

main.tf

subnet_id = element(
  aws_subnet.tf_vpc_subnet[*].id,
  index(keys(var.ec2_instance_ami_map), each.key) % length(aws_subnet.tf_vpc_subnet)
)
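For context, a hedged sketch of the full for_each resource this expression lives in (the map variable name follows the book's ec2_instance_ami_map; the map contents are illustrative):

variable "ec2_instance_ami_map" {
  type = map(string)
  default = {
    web = "ami-0453ec754f44f9a4a"
    db  = "ami-0453ec754f44f9a4a"
  }
}

resource "aws_instance" "tf_instance" {
  for_each      = var.ec2_instance_ami_map
  ami           = each.value
  instance_type = "t2.micro"
  subnet_id = element(
    aws_subnet.tf_vpc_subnet[*].id,
    index(keys(var.ec2_instance_ami_map), each.key) % length(aws_subnet.tf_vpc_subnet)
  )

  tags = {
    Name = each.key
  }
}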

Explanation of subnet_id:

aws_subnet.tf_vpc_subnet[*].id:
- This retrieves a list of all subnet IDs associated with the
aws_subnet.tf_vpc_subnet resource.

keys(var.ec2_instance_ami_map):
- Extracts all the keys from the variable ec2_instance_ami_map,
which is a map defining the AMI for each instance.

each.key:
- Refers to the current key being processed in the for_each block,
which corresponds to one EC2 instance in the
ec2_instance_ami_map.

index(keys(var.ec2_instance_ami_map), each.key):
- Determines the index of the current key (each.key) in the list of
keys extracted from the map.

length(aws_subnet.tf_vpc_subnet):
- Calculates the total number of subnets available in the
aws_subnet.tf_vpc_subnet resource.

index(...) % length(...):
- This modulo operation ensures that the index cycles through
the list of subnets. For example:
- If there are 3 subnets and 5 instances, the instance-to-subnet
mapping would rotate (e.g., 0 -> Subnet1, 1 -> Subnet2, 2 ->
Subnet3, 3 -> Subnet1, 4 -> Subnet2).
element(...):
- Picks the subnet ID from the list of subnet IDs
(aws_subnet.tf_vpc_subnet[*].id) based on the calculated index.

Now type "terraform plan" → "terraform apply".

Now go to the AWS console to see the result.

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.
29. PROJECT: AWS IAM MANAGEMENT

For this project you must have knowledge of

• IAM services (users, policies, groups, roles)
• Terraform
• YAML configuration

In this project, we will use Terraform to provision AWS infrastructure
and assign IAM policies to a group of users using the AWS Identity and
Access Management (IAM) service.
• Provide user and role info via a YAML file.
• Read the YAML file and process the data.
• Create IAM users.
• Generate passwords for the users.
• Attach policies/roles to each user.

In this project we will be implementing this diagram:

Create a new folder in "AWS" named "iam-management"; inside that
folder create files like
• "user.yaml" for storing details of the users, groups, and policies
• "main.tf" for writing the infrastructure for the IAM AWS services
• "output.tf" for storing output details
In “user.yaml”
STEP-1 STEP-2
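The YAML itself appears only as screenshots; a hedged sketch of the shape such a file might take (the usernames and groups are illustrative assumptions):

users:
  - username: alice      # illustrative user
    groups: [developers]
  - username: bob        # illustrative user
    groups: [admins]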

STEP-3

In "main.tf"

Provider

locals
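The locals block is shown as screenshots; a hedged sketch of how the YAML might be read in (yamldecode and file are real Terraform functions; the structure matches the YAML sketch above):

locals {
  config = yamldecode(file("${path.module}/user.yaml"))
  # Key the users by username for easy for_each lookups.
  users  = { for u in local.config.users : u.username => u }
}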
# Create IAM users for all team members
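A hedged sketch of this step, building on the locals above:

resource "aws_iam_user" "team" {
  for_each = local.users
  name     = each.key
}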

# Generate AWS access keys for all users

# Set up console access for users with initial password


requirements

# Configure organization-wide password policy settings

# Create IAM groups for different team roles

# Assign users to their respective IAM groups

# Attach AWS managed policies to IAM groups

# Attach AWS managed policies directly to users

In “output.tf”

After running "terraform apply" we will see these outputs in the AWS
IAM console.

Users console

User group console

You can log in to any user account to verify its functionality. However,
it is the admin's responsibility to share the account number,
username, and password.
After logging in with the password provided by the admin, AWS will
prompt you to change the password to one of your choice. If you
prefer not to change it, you can skip this step.

Note – when all work is done, you should run terraform destroy to
delete all the services you used while doing the project.


30. TERRAFORM MODULES

• Modules are containers for multiple resources that are used
together.
• A module consists of a collection of .tf and/or .tf.json files kept
together in a directory.
• Modules are the main way to package and reuse resource
configurations with Terraform.

Real-Life Scenario: Deploying an AWS VPC

Imagine you need to deploy multiple VPCs (for dev, staging, and prod
environments) with the same structure but different configurations (e.g.,
CIDR blocks, subnets).

Steps without modules:

You would have to write the same Terraform code repeatedly for each
environment, making it harder to maintain and prone to errors.

Solution with modules:

You can create a VPC module with reusable code and call it for each
environment with different parameters.

Structure of a minimal module

README.md:
A text file that provides documentation for the module.
Explains the module's purpose, usage, required inputs, and outputs.



main.tf:
The core Terraform configuration file.
Defines the resources, data sources, and logic for the module.

variables.tf:
Specifies the inputs the module needs.
Defines variables with types, descriptions, and default values.

outputs.tf:
Declares the outputs from the module.
Shares resource information (like IDs, IPs) with the root module or other
modules.

This structure ensures that the module is self-contained, reusable, and
easy to understand.

To find published modules, go to "https://registry.terraform.io"; you
will see something like this:

Now click on Modules and you will see something like this:



By selecting the AWS provider, you can access a wide range of modules
tailored to various AWS services. These modules are organized with
user-friendly names, making it easier to identify and choose the module
that best fits your requirements.

Implementation of VPC using Terraform modules

Let's implement a VPC using Terraform modules. We'll create a new
directory named "tf-module-vpc". Inside this directory, we'll create a
file called compute.tf. Next, search for the AWS VPC module in the
Terraform Registry. You'll find a module that looks something like this:


Now copy the instructions given under "Provision Instructions" into
compute.tf.

Now type "terraform init" in the terminal; you will then notice the
module being downloaded:

Go through the module's "main.tf", "outputs.tf", "variables.tf",
"versions.tf", and "vpc-flow-logs.tf".
To understand how to use the module effectively, read the
documentation thoroughly. Navigate to the required module in the
Terraform Registry and scroll down. You'll find key sections like
"Readme", "Inputs", "Outputs", "Dependencies", and
"Resources". These sections provide detailed information about the
module's usage, configuration, required variables, outputs, and
dependencies.

n the "Inputs" section of the Terraform Registry, you’ll see all the
variables defined in the module's variables.tf file. These variables
automatically appear there for easy reference.

If you want to explore the infrastructure code directly, click on the
"Source Code" link in the documentation. This will take you to the
module's GitHub repository, where you can view all the details related
to the AWS VPC module.

When a module is used, it needs to know which AWS region to deploy
the infrastructure in. To specify this, you provide the AWS region in the
provider block or as a variable in your Terraform configuration.

For further configuration, follow the README file of the module.

Final configuration of "compute.tf":
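The final compute.tf is a screenshot in the original; a hedged sketch of what such a configuration typically looks like (the inputs shown are real inputs of terraform-aws-modules/vpc/aws; the values and version constraint are illustrative):

provider "aws" {
  region = "us-east-1"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pick the version shown in the registry

  name = "tf-module-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
}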

output



Implementation of EC2 instance using Terraform modules

Now, we will create a new file named instance.tf inside the tf-module-
vpc directory.

This file will be used to define EC2 instances.

To find the module for deploying EC2 instances, search for terraform-
aws-modules/ec2-instance in the Terraform Registry.
Use the same process you followed to find the VPC module:
1. Go to the Terraform Registry.

2. Search for terraform-aws-modules/ec2-instance.

3. Review the module documentation for the "Readme", "Inputs",
"Outputs", "Dependencies", and "Resources" sections to
understand its configuration and usage.

In "instance.tf":

VPC security group and subnet:

• vpc_security_group_ids: Associates the instance with the default
security group from the VPC module (defined as
module.vpc.default_security_group_id).

• subnet_id: Places the EC2 instance in the first public subnet
created by the VPC module (module.vpc.public_subnets[0]).
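A hedged sketch of instance.tf along these lines (the module inputs and the referenced VPC outputs are real; the values are illustrative):

module "ec2_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 5.0" # pick the version shown in the registry

  name          = "tf-module-ec2"
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"

  vpc_security_group_ids = [module.vpc.default_security_group_id]
  subnet_id              = module.vpc.public_subnets[0]
}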

Output

NOTE: we are only doing “terraform plan” not “terraform apply” because it
may cost you, do it at your own risk.

Building your own Terraform modules

Building your own Terraform modules involves creating reusable
configurations to manage infrastructure more efficiently.
These modules group related resources together, making your code
modular, scalable, and easy to maintain.
Custom modules are especially useful when standardizing
infrastructure across multiple environments (e.g., dev, staging, prod).

A module typically includes:

• main.tf: Defines resources.
• variables.tf: Specifies input variables.
• outputs.tf: Declares output values.
• README.md: Helps users understand the module's purpose and how
to use it.


Real-Life Scenario:

Scenario: A company needs to deploy VPCs with consistent
configurations, so we need all of the following in "tf-own-module":

Requirements:
• Accept a cidr_block from the user to create the VPC.
• The user can create multiple subnets:
→ Get the CIDR block for each subnet from the user
→ Get the AZs (availability zones)
→ The user can mark a subnet as public (the default is private)
If public, create an IGW
Associate the public subnet with a routing table

We will create the following VPC configuration by developing a custom
Terraform module,

where all the subnets, public or private, are attached to a route table,
and this route table is attached to an internet gateway.

The file structure will be as follows:


In tf-own-module-vpc/modules/vpc/version.tf

In tf-own-module-vpc/modules/vpc/variables.tf
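The variables.tf is a screenshot in the original; a hedged sketch of inputs that satisfy the stated requirements (the variable names and object shape are assumptions; optional() needs Terraform 1.3+):

variable "cidr_block" {
  type        = string
  description = "CIDR block for the VPC"
}

variable "subnets" {
  description = "Map of subnets to create"
  type = map(object({
    cidr_block        = string
    availability_zone = string
    public            = optional(bool, false) # private by default
  }))
}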



In tf-own-module-vpc/modules/vpc/main.tf
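Again, the real file is a screenshot; a hedged sketch of the core logic (filtering public subnets with a for expression; all names are assumptions):

resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

resource "aws_subnet" "this" {
  for_each          = var.subnets
  vpc_id            = aws_vpc.this.id
  cidr_block        = each.value.cidr_block
  availability_zone = each.value.availability_zone
}

locals {
  public_subnets = { for k, v in var.subnets : k => v if v.public }
}

# Create the IGW and route table only if at least one subnet is public.
resource "aws_internet_gateway" "this" {
  count  = length(local.public_subnets) > 0 ? 1 : 0
  vpc_id = aws_vpc.this.id
}

resource "aws_route_table" "public" {
  count  = length(local.public_subnets) > 0 ? 1 : 0
  vpc_id = aws_vpc.this.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this[0].id
  }
}

resource "aws_route_table_association" "public" {
  for_each       = local.public_subnets
  subnet_id      = aws_subnet.this[each.key].id
  route_table_id = aws_route_table.public[0].id
}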



In tf-own-module-vpc/modules/vpc/output.tf



In tf-own-module-vpc/vpc-implementation-as-root.tf
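A hedged sketch of the root-level module call (the source path follows the folder structure above; the values are illustrative):

module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"

  subnets = {
    public-1 = {
      cidr_block        = "10.0.1.0/24"
      availability_zone = "us-east-1a"
      public            = true
    }
    private-1 = {
      cidr_block        = "10.0.2.0/24"
      availability_zone = "us-east-1b"
    }
  }
}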

In tf-own-module-vpc/vpc-output-as-root.tf



When you do "terraform init" → "terraform plan" → "terraform apply"
you get the output.

NOTE: we are only doing "terraform plan", not "terraform apply", because
it may cost you; do it at your own risk.



31. PREPARE MODULE FOR PUBLISH
Creating your own Terraform module simplifies infrastructure
management by enabling reusability, consistency, and scalability.

It allows you to define resources once and reuse them across projects,
ensuring uniform configurations, reducing duplication, and making
collaboration easier.

To publish a module you need:
• a README.md file
• a LICENSE
• an example
• to push the code to GitHub
• the Terraform Registry

To make this section successful we will take "tf-own-module" and add
some extra files and directories.

This is the previous folder structure:



In tf-own-module-vpc/modules/vpc/README.md



The code provided in the "Usage" section is extracted from tf-own-
module-vpc/vpc-implementation-as-root.tf, excluding the provider
section.

Add the necessary LICENSE file, which you can generate while creating
the repository on GitHub.

Now, create a new folder named examples inside the tf-own-module-vpc
directory. Within the examples folder, create another subfolder named
complete. Inside the complete folder, replicate all the root-level files,
such as README.md, main.tf and output.tf.

Copy the contents of tf-own-module-vpc/vpc-implementation-as-root.tf
to tf-own-module-vpc/examples/complete/main.tf.

Copy the contents of tf-own-module-vpc/vpc-output-as-root.tf to
tf-own-module-vpc/examples/complete/outputs.tf.

Now this is the final folder structure:

Create a new folder named GitHub-Code-Publisher and copy only the
modules and examples directories from tf-own-module-vpc into it.



Log in to your GitHub account, click on the New button, and create a
new repository.

Then you will see this layout.

Give the repo a proper name, something like "terraform-aws-subha-vpc"
(the registry requires the terraform-<PROVIDER>-<NAME> format),
choose the MIT license, then click on "Create repository".

Then you will see this layout.

Then click on Code, copy the link, and run it in the VS Code terminal,
prefixing it with git clone.
Now you will see this file structure

Move all files from the modules/vpc folder (e.g., main.tf, outputs.tf,
variables.tf, versions.tf) to the root level of the repository, just outside
the modules/vpc directory.

Delete the now-empty modules/vpc folder.

Copy both the files and the examples folder into the terraform-aws-
subha-vpc directory.

This is the new file structure after editing



Now add the files and folders to GitHub. After you have added them,
you will see this type of layout in GitHub.

Now add the tag
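A hedged example of tagging (the registry picks up versions from semantic-version git tags like v1.0.0):

git tag v1.0.0
git push origin v1.0.0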

Now go to the Terraform Registry and click on the sign-in option.



Now this page will appear

Now go to the main page, select "Publish" and click on "Module".



This page will appear.

Agree to the terms and conditions, then click "PUBLISH MODULE".

If all the above steps are correct, then you will see the module in the
Terraform Registry.

If you want to use this module in a configuration, then please follow
"Implementation of EC2 instance using Terraform modules".



32. TERRAFORM DEPENDENCIES

Terraform dependencies ensure resources are created, modified, or
destroyed in the correct order by defining relationships between them.
Dependencies help manage resource interconnections, avoiding errors
during provisioning or destruction.

Key uses:
1. Automatic ordering: Terraform understands which resources
need to exist before others.
o Example: A subnet must be created before launching an EC2
instance in it.

2. Error prevention: Ensures dependent resources are not
destroyed or modified prematurely.
o Example: Deleting an S3 bucket ensures objects within it are
removed first.

3. Custom dependencies: Explicitly define dependencies using
depends_on when Terraform cannot infer relationships.

We will now create a new folder called tf-dependencies inside the
"AWS" folder, and add a file named main.tf, which will serve as the
entry point for defining the required Terraform configurations related
to dependencies. A hedged sketch of the depends_on usage is shown
below.
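A minimal sketch (the resource local names follow the book's aws_security_group.main and aws_instance.main):

resource "aws_security_group" "main" {
  name = "tf-dependencies-sg"
}

resource "aws_instance" "main" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"

  # Force the security group to be created first.
  depends_on = [aws_security_group.main]
}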

Before using "depends_on" in the aws_instance resource:
Output → after applying terraform init → terraform plan → terraform
apply, both "aws_security_group.main" and "aws_instance.main" are
created simultaneously.

After using "depends_on" in the aws_instance resource:
Output → after applying terraform init → terraform plan → terraform
apply, it first creates "aws_security_group.main" and then creates
"aws_instance.main".


33. RESOURCES LIFECYCLE
The lifecycle block in Terraform is used to manage specific behaviors
of resources, such as preventing their accidental deletion, ignoring
certain changes, or customizing resource creation and destruction
processes.

Key arguments in the lifecycle block

create_before_destroy

Ensures that when a resource is replaced (due to changes in
configuration), the new resource is created before the old one is
destroyed.

Useful for resources that cannot have downtime, such as load balancers
or production systems.
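A minimal sketch, reusing the EC2 values from earlier in the book:

resource "aws_instance" "web" {
  ami           = "ami-0453ec754f44f9a4a"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}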

Aspect        create_before_destroy = true    create_before_destroy = false
Order of      Create first, destroy later     Destroy first, create later
actions
Downtime      No downtime                     Downtime possible
Resource      May require sufficient          Fewer resource
dependency    resources (e.g., IPs,           requirements.
              capacity) for two resources
              to coexist temporarily.
Usage         Production-critical             Non-critical resources where
              resources                       downtime is acceptable.

Things to consider:



• Cost: Creating a new resource before destroying the old one might temporarily increase costs (e.g., running two EC2 instances simultaneously).

• Resource Limits: Some AWS services have quotas (e.g., a limited number of EC2 instances or IP addresses per region). Ensure your account can support two instances temporarily.

• Dependencies: If other resources depend on the old instance, they might need to be updated after the replacement process.

prevent_destroy
Prevents a resource from being destroyed, even if a terraform destroy
command is run or the resource is removed from the configuration.

Useful for critical resources like databases or production VPCs.

If you try to destroy the resource, Terraform will throw an error unless
you explicitly disable this protection.
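
A minimal sketch (the bucket name is hypothetical):

resource "aws_s3_bucket" "critical" {
  bucket = "my-critical-data-bucket" # hypothetical name

  lifecycle {
    # terraform destroy (or removing this resource from the
    # configuration) now fails with an error.
    prevent_destroy = true
  }
}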

Aspect                 prevent_destroy = true            prevent_destroy = false
Behavior on Destroy    Prevents resource from being      Allows resource to be
                       destroyed                         destroyed
Error on Destroy       Terraform fails with an error     Terraform proceeds without
                                                         error
Use Case               Critical resources (databases,    Non-critical or temporary
                       VPCs)                             resources
Override Requirement   Must explicitly disable to        No override needed
                       destroy

Things to Consider
1. Critical Resources: Use prevent_destroy for critical resources like:
o Databases (to prevent data loss).
o VPCs or subnets (to avoid breaking the network).
o Persistent storage like S3 buckets or EBS volumes.

2. Testing Environments: Avoid using prevent_destroy in non-critical environments where resources are often created and destroyed (e.g., development or testing).

3. Force Destroy: To override prevent_destroy, you can temporarily remove the argument, apply the changes, and then re-add it.

ignore_changes

Tells Terraform to ignore updates to specific attributes of a resource, even if they are changed outside of Terraform (e.g., manually or by another process).
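
A minimal sketch (the AMI ID is a placeholder):

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  lifecycle {
    # Tags edited manually in the console will not be reverted by Terraform.
    ignore_changes = [tags]
  }
}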



Aspect                  With ignore_changes                Without ignore_changes
Manual Changes          Ignored for specified attributes   Overwritten by Terraform
Configuration Updates   Ignored for specified attributes   Applied as part of the next
                                                           plan/apply
Use Case                When certain attributes are        When Terraform should fully
                        managed externally                 control the resource

Things to Consider
1. Dynamic Values: Use ignore_changes for attributes that are
frequently updated dynamically by external systems (e.g., tags,
user_data, or IAM policies).
2. Critical Updates: Avoid using ignore_changes for critical
attributes (e.g., instance_type, cidr_block) to ensure Terraform
manages them properly.
3. Fine-Tuning: You can ignore specific attributes rather than
ignoring all changes, making it a granular control mechanism.

replace_triggered_by

The replace_triggered_by argument in the Terraform lifecycle block is used to trigger the replacement of a resource when changes occur to specified dependencies or attributes outside of its direct configuration.

How It Works:
• Purpose: It defines specific dependencies that, when changed, will
cause the resource to be destroyed and recreated.
• Use Case: This is useful when a resource must be replaced if a
related resource or attribute changes, even if the resource itself
hasn’t directly changed in the configuration.
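
A minimal sketch (names and the AMI ID are placeholders; replace_triggered_by requires Terraform 1.2 or later):

resource "aws_security_group" "main" {
  name = "app-sg" # hypothetical name
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  lifecycle {
    # Recreate this instance whenever the security group's id changes,
    # i.e., whenever the security group itself is replaced.
    replace_triggered_by = [aws_security_group.main.id]
  }
}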

34. P R E & P O S T C O N D I T I O N – R E S O U R C E V A L I D A T I O N S

In this topic we will cover:

• preconditions
• postconditions

These allow you to define checks that must be true before a resource is created (preconditions) and after a resource is created (postconditions).

We will be using preconditions and postconditions inside the lifecycle {…} block.

Feature           Preconditions                        Postconditions
Validation Time   Before resource creation/            After resource creation/
                  update/destruction                   update/destruction
Purpose           Ensure input/configuration           Validate resource properties
                  validity                             or outcomes
Error Message     Displays if the condition            Displays if the condition
                  evaluates to false                   evaluates to false
Use Cases         AMI ID format validation, region     Instance state, S3 bucket
                  checks, variable checks              policy validation, output checks

Syntax
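
The general shape of both checks inside a resource's lifecycle block (a sketch; the condition expressions are placeholders to be replaced with real boolean expressions):

lifecycle {
  precondition {
    condition     = true # replace with an expression checked before the action
    error_message = "Shown when the condition evaluates to false."
  }

  postcondition {
    condition     = true # replace with an expression checked after the action
    error_message = "Shown when the condition evaluates to false."
  }
}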

Examples

preconditions
Ensure the selected AMI starts with ami-:
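
A sketch, assuming a var.ami_id input variable:

resource "aws_instance" "main" {
  ami           = var.ami_id
  instance_type = "t2.micro"

  lifecycle {
    precondition {
      # can() turns a regex failure into false instead of an error.
      condition     = can(regex("^ami-", var.ami_id))
      error_message = "The AMI ID must start with \"ami-\"."
    }
  }
}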

postconditions
Ensure the EC2 instance has the desired state of "running":
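
A sketch using self to refer to the resource's own attributes after creation:

resource "aws_instance" "main" {
  ami           = var.ami_id
  instance_type = "t2.micro"

  lifecycle {
    postcondition {
      condition     = self.instance_state == "running"
      error_message = "The EC2 instance must be in the \"running\" state."
    }
  }
}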

Combined Example (Preconditions and Postconditions)
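
A sketch combining both checks in a single lifecycle block:

resource "aws_instance" "main" {
  ami           = var.ami_id
  instance_type = "t2.micro"

  lifecycle {
    # Checked before the instance is created or updated.
    precondition {
      condition     = can(regex("^ami-", var.ami_id))
      error_message = "The AMI ID must start with \"ami-\"."
    }

    # Checked after the instance is created or updated.
    postcondition {
      condition     = self.instance_state == "running"
      error_message = "The EC2 instance must be in the \"running\" state."
    }
  }
}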


35. T E R R A F O R M S T A T E M O D I F I C A T I O N
Terraform state modification is necessary to manage changes in your
infrastructure that are not automatically reflected in the state file.
Common use cases include:

• Drift Management: When manual changes are made to resources, and Terraform's state file no longer matches the actual infrastructure.

• Refactoring: Moving resources to a new module or address without destroying and recreating them.

• Removing Orphaned Resources: Deleting resources from state without impacting actual infrastructure.

• Backend Migration: Updating the backend configuration or moving state files to a new backend.

List all resources in the state

Purpose: Displays a list of all resources currently managed by Terraform in the state file.
Usage: Useful for auditing the state file or verifying that resources are being tracked.
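
The command is:

terraform state list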

Show details of a specific resource



Purpose: Provides detailed information about a specific resource from
the state file, such as its attributes and metadata.
Usage: Helps in debugging or reviewing the current state of a
particular resource.
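
For example (the resource address is illustrative):

terraform state show aws_instance.main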

Move a resource to a different address

Purpose: Changes the address of a resource in the state file. This is needed when resource names are updated in the code without recreating them.
Usage: Prevents the destruction and recreation of resources when refactoring Terraform configurations.
Example: Rename an S3 bucket resource:
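
For example (the resource names are hypothetical):

terraform state mv aws_s3_bucket.old_name aws_s3_bucket.new_name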

Remove a resource from the state

Purpose: Deletes a resource from the state file without affecting the
actual infrastructure.
Usage: Useful for removing resources that are no longer managed by
Terraform or were manually created outside of Terraform.
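
For example (the resource address is illustrative):

terraform state rm aws_s3_bucket.example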

Pull the current state

Purpose: Downloads the latest Terraform state from the remote backend to view or modify locally.
Usage: Helps in analyzing or troubleshooting the current state file.
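
For example (here redirecting the output into a local file):

terraform state pull > current.tfstate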



Push a local state file to the remote backend

Purpose: Updates the remote backend with a local state file, ensuring
consistency across environments.
Usage: Use this cautiously to avoid overwriting valid state data.
Typically used for recovery or migration.
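
For example (the file name is an example):

terraform state push current.tfstate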

List all state commands

Purpose: Displays all available state management commands in Terraform.
Usage: Reference this to explore state-related operations and their options.
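
Running the parent command without a subcommand prints the available subcommands:

terraform state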



36. T E R R A F O R M I M P O R T C O M M A N D

Terraform import is a command in Terraform that allows you to import existing infrastructure resources into your Terraform state.

Real-Life Scenario:
You’ve manually created resources in AWS, such as an S3 bucket or an
EC2 instance, but now you want Terraform to manage those resources
without recreating them. Terraform's import command allows you to
bring these existing resources into the Terraform state.

I have already created an S3 bucket in my AWS console.

We will now create a new folder called tf-import-s3 inside the folder named “AWS”. In it, we will add a file named main.tf, which will serve as the entry point for defining the required Terraform configurations related to importing existing resources.

In “main.tf”
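
A minimal sketch of the configuration (the region is assumed; the resource block is intentionally left empty and is filled in after the import):

provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_s3_bucket" "s3_bucket" {
  # Attributes are added after importing, based on the output of
  # "terraform state show aws_s3_bucket.s3_bucket".
}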



Use the terraform import command to import the existing bucket into
the Terraform state.
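
For example (the bucket name is a placeholder for your existing bucket):

terraform import aws_s3_bucket.s3_bucket <existing-bucket-name>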

If you see this output, that means it was successfully imported.

Now type “terraform state list”

Now type “terraform state show aws_s3_bucket.s3_bucket” to get all the information about that particular resource.

Now update the config file of the “aws_s3_bucket” resource.

Now type “terraform apply”



37. T E R R A F O R M W O R K S P A C E

A Terraform Workspace is an isolated environment within a single Terraform configuration, allowing you to manage multiple instances of the same infrastructure without duplicating code.
Each workspace has its own state file, which keeps track of the resources managed by Terraform in that specific workspace.

Uses of Terraform Workspaces:

1. Environment Isolation:
Use workspaces to manage different environments (e.g., dev, staging, prod) with the same configuration.

2. Multi-Tenant Applications:
Manage separate infrastructure for different clients or tenants.

3. Avoid State File Conflicts:
Keep state files isolated for better organization and avoid conflicts when managing resources.

How Workspaces Work:


1. The default workspace is called default.
2. You can create, switch, and manage workspaces using Terraform
commands.
3. Each workspace has its own state file stored in the backend.

Key Commands (a combined sketch follows this list):

List all workspaces:

Create a new workspace:



Switch to a workspace:

Delete a workspace (if empty):
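
A sketch of these commands, using a workspace named dev as an example:

terraform workspace list          # list all workspaces
terraform workspace new dev       # create (and switch to) a new workspace
terraform workspace select dev    # switch to an existing workspace
terraform workspace delete dev    # delete a workspace (must be empty)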

Terraform Workspaces for Environment Isolation:

• dev: Contains resources for development (e.g., small EC2 instances).
• staging: Contains resources for testing.
• prod: Contains production-grade resources.

Each workspace manages its own state file and resources independently, ensuring no overlap.

Command Flow Example:



For this demo we will be using the previous terraform folder named “tf-
import-s3”
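
A possible flow, assuming dev and prod workspaces:

terraform workspace new dev       # create and switch to "dev"
terraform apply                   # resources tracked in dev's own state file
terraform workspace new prod      # create and switch to "prod"
terraform apply                   # a separate copy, tracked in prod's state file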

When you apply a Terraform configuration across multiple workspaces, creating resources with the same name (like an S3 bucket) in each workspace will cause an error, as most resources (e.g., AWS S3 buckets) require globally unique names.

To resolve this issue, you can use the terraform.workspace interpolation to include the workspace name in the resource's name, ensuring it is unique for each environment.

Something like this
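
A sketch (the bucket prefix is hypothetical):

resource "aws_s3_bucket" "s3_bucket" {
  # Produces e.g. "my-app-bucket-dev" or "my-app-bucket-prod",
  # keeping the name unique per workspace.
  bucket = "my-app-bucket-${terraform.workspace}"
}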




38. T E R R A F O R M C L O U D W I T H G I T H U B

Terraform Cloud is a managed service provided by HashiCorp that facilitates collaboration on Terraform configurations, providing features like:

- remote state management,
- version control system (VCS) integration,
- automated runs, and
- secure variable management.

STEP-1
Go to any browser and type “app.terraform.io”; this page will appear.

STEP-2
Since you have an HCP account and are logged in, click on “Continue with HCP account”.

STEP-3
Now click on “Sign in with GitHub”. Then tick all the terms and conditions and click on “Continue”.

STEP-4
Now again click on “Continue”.

STEP-5
Then this page will appear, then click on “Create organization”

STEP-6
Then give the “Organization name” and click “Create organization”. Then this layout will appear



STEP-7
For the current demo we will be using “Version a new Workspace”.
Then this layout will appear

STEP-8
Now go to GitHub and create a new repo (here named “tf-cloud-s3”) like this



STEP-9
Now go to the Terraform Cloud workspace setup and click on GitHub; under that, select “GitHub.com”, then allow HCP to access your GitHub account. This layout will appear with all the repos.

Then select “tf-cloud-s3” and give the description.

Finally, click on “create” to create the workspace.

Now clone the repo into a local directory. This is the file structure after adding “main.tf”.

In main.tf
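
A minimal sketch of a possible main.tf (the region, names, and use of a random_id suffix are assumptions; the byte_length value matches the change made later in this demo):

provider "aws" {
  region = "us-east-1" # assumed region
}

resource "random_id" "suffix" {
  byte_length = 12
}

resource "aws_s3_bucket" "s3_bucket" {
  # Random hex suffix keeps the bucket name globally unique.
  bucket = "tf-cloud-s3-${random_id.suffix.hex}"
}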



In terminal
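
For example:

git add main.tf
git commit -m "Add S3 bucket configuration"
git push origin main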

The pushed files now appear in GitHub.

Then go to the Terraform Cloud workspace; you will see this layout



Then you will find this error, as you have not yet set the variables in the Terraform Cloud workspace.

To resolve this error:

First go to AWS
➔ then IAM
➔ then go to Users
➔ select “tf-user”, as this is the user you are working with in Terraform
➔ either create a new access key ID or reuse the one you are using in your local directory
➔ from there copy the “Access key” and “Secret Access key”

Second, go to the Terraform Cloud workspace.

Search for “Variables”.

Then this layout will appear



In this, you can either set:

Workspace variables
Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key.

Or

Variable sets
Allow you to reuse variables across multiple workspaces within your organization; recommended for variables used in more than one workspace.

For now we will be creating “Workspace variables”. To do that, click on “+Add variables”. As these are environment variables, click on “Environment variables”.

First add the access key with the name “AWS_ACCESS_KEY_ID”.

Then, for the AWS secret access key, name it “AWS_SECRET_ACCESS_KEY”.



Give the key and value and also tick the “Sensitive option”
Then click on “Add variables”

Now go to the code in your local directory and make a minor change, for example changing byte_length = 12 to byte_length = 13.

Commit it again, and you will see the change in the given layout

Click on “See details”; you will see a layout asking for the next action.

For now click on “Confirm & apply”; it will ask you to give a comment, and finally click on “Confirm plan”.

Go to S3 bucket in AWS console for verification



To see the states, go to the “States” option for a clear visual representation.

To destroy the s3-bucket:

Click on “Settings” and you will see “Destruction and Deletion”.

Click on “Queue destroy plan”.

Then read the instructions carefully and fill in the blank.

Then click on “Confirm and apply” → give the comment → click on “Confirm plan”.

The resources will be deleted.

