
Cloud computing refers to the delivery of computing services, including

- servers
- storage
- databases
- software
- networking
- analytics
over the internet (or "the cloud") on a pay-as-you-go basis.

Rather than owning and maintaining physical servers and infrastructure,


users can rent or lease resources from cloud providers,
who operate and manage the underlying technology and infrastructure.

Benefits
- Users can scale their resources up or down quickly and easily to meet
changing business demands
- Users pay only for what they actually use

Examples
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform.

Day1
Motivation for Cloud adoption
- On Demand resources
- Cost effective
- No upfront costs
- Reduced operational costs
- Availability
- HA and DR
- Security
- shared responsibility
- Global footprint
- Global presence
- Increase innovation
- Agility
- Vertical scaling
- Horizontal scaling

Cloud Providers
- AWS - IaaS, PaaS and SaaS
- Azure - IaaS, PaaS and SaaS
- GCP - IaaS, PaaS and SaaS
- IBM - IaaS, PaaS and SaaS
- Oracle - IaaS, PaaS
- DigitalOcean - IaaS, PaaS
- Linode - IaaS, PaaS
- Heroku- PaaS
- MongoLab - PaaS - Database as a Service
- Confluent - PaaS - Managed Kafka as a Service
- Okta/Auth0 - PaaS - Identity as a Service

Cloud services

- Core Services
- Compute, Storage and Networking

- Three levels of abstractions


- Infrastructure as a Service
Cloud provider manages
- Provisioning
- Virtualization
- Security of physical resources
Customer
- Patching
- Monitoring
- Availability
- Scaling
Companies offering IaaS - DigitalOcean, Linode, Vultr
- Platform as a Service
Cloud provider manages
- Provisioning
- Virtualization
- Security of physical resources
- Patching
- Monitoring
- Availability
- Scaling
- Hosting
- Backups
Customer
- Deploy the application
- Data
Companies - Heroku, Confluent, MongoLab
Examples
- RDS, MSK
- Software as a Service
Cloud provider manages
- Provisioning
- Virtualization
- Security of physical resources
- Patching
- Monitoring
- Availability
- Scaling
- Hosting
- Backups
- Deploy the application
- Data
Customer
- Subscription charges
Examples
- Office 365, Zoho, Quickbooks, Netflix

- Function as a Service (Serverless)


- Container as a Service (EKS, ECS)
- Database as a Service
- Identity as Service - Okta, WSO2
- Desktop as a Service - Amazon Workspaces
- XaaS - Anything as a Service

Deployment models
- On Prem - Private cloud
- Defence sectors
- Intelligence services
- Govt sector projects
- Hybrid
- Migration plan to Cloud
- Part of the application on prem and other service on Cloud

- Public cloud
- All services are leveraged from the cloud

AWS API
- Management console (UI)
- Users
- AWS CLI
- scripting
- SDK
- Programming languages
- Support for Java, Python, Go
- CloudFormation
- Infrastructure as Code (IaC)

All API requests are intercepted for


- Authentication
- Are you the same person you claim to be
- Prove your credentials
- Username/password
- Biometric
- OTP/TOTP
- SSH keys
- Certificate
- Kerberos
- OpenId Connect
- Secure key
- Chip and Pin
- Swipe
- Authorization
- Do you have the necessary permissions to perform the job
- By default in AWS, every action is an implicit DENY unless explicitly ALLOWED

Factors impacting the region selection


- Governance/Compliance
- Latency
- Cost
- Service availability

Global services
- AWS Identity and Access Management (IAM): IAM is a global service that enables
you to manage access to AWS resources across all regions. IAM users, groups, and
roles you create are not tied to a specific region.
- AWS CloudFront: CloudFront is a global content delivery network (CDN) service. It
has edge locations spread across multiple continents and is not limited to a single
region.
- Amazon Route 53: Route 53 is a global DNS service. It allows you to manage DNS
records and resolve domain names worldwide without being constrained by region-
specific boundaries.
- AWS WAF (Web Application Firewall): WAF is a global service that provides
protection against web application exploits and attacks. You can configure WAF
rules globally and apply them to your web applications in any region.
- AWS Certificate Manager (ACM): ACM is a global service that simplifies the
process of provisioning, managing, and deploying SSL/TLS certificates for your AWS
resources. Certificates obtained through ACM can be used in any AWS region.
- Amazon CloudWatch: CloudWatch is a global monitoring service that collects and
tracks metrics, logs, and events from various AWS resources. It operates across
multiple regions, allowing you to monitor and analyze data from a centralized
location.
- AWS Direct Connect: Direct Connect is a global service that provides dedicated
network connections between your on-premises infrastructure and AWS. It is
available in various locations worldwide, enabling private and reliable
connectivity.

Amazon Macie:
This is a data security and privacy service that uses machine learning to
automatically discover,
classify, and protect sensitive data in AWS. It is currently available in only 8
regions.

Amazon GameLift:
This is a managed service for deploying, operating, and scaling session-based
multiplayer games.
It is currently available in only 6 regions.

Amazon RDS on VMware:


This is a service that allows you to run Amazon Relational Database Service (RDS)
on-premises in your own data center.
It is currently available in only 3 regions.

Amazon Aurora with PostgreSQL compatibility:


This is a high-performance, highly available, and scalable relational database
service that is compatible with PostgreSQL.
It is currently available in only 4 regions.

AWS RoboMaker:
This is a service that makes it easy to develop, simulate, and deploy intelligent
robotics applications
at scale. It is currently available in only 6 regions.

Cloud deployment models

On prem
- Infra will be managed by the Organizations
- Pre cloud
Usecases
- Org has invested a huge amount in infrastructure
- Org has the operational capability to run a complex setup
- Org has strict regulation/compliance requirements, Ex: Defence, R&D, Judiciary, Govt
organizations

Hybrid
- Part of the infra is on-prem and the rest is on the cloud
- Typically used while migrating applications

Usecase
- During Migration
- Company is evaluating the cloud strategy
- Licence/lease for the on-prem infrastructure is about to expire
- Leverage only a few services from the cloud

Cloud
- All the services are hosted on the cloud
Usecases
- For cloud native applications
- Greenfield projects
- startups

Identity and Access Management


- Anonymous users
- Principal/Identity - User/entity/application who is authenticated
- Users
- program
- application
- service

- Microsoft Azure
- Equivalent of IAM (Identity and Access Management) is Azure Active Directory
(Azure AD).
- Cloud-based identity and access management service
- Allows you to manage and secure your organization's users, devices, and
applications.
- It provides features for authentication, single sign-on (SSO), role-based access
control (RBAC), multi-factor authentication (MFA), and more.

- Google Cloud Platform (GCP)


- Equivalent of IAM is also called Identity and Access Management (IAM).
- IAM allows you to control access to resources and services within Google Cloud
projects. It enables you to manage users, groups, and permissions, implementing the
principle of least privilege to secure your cloud infrastructure.
Both Azure AD and GCP IAM serve similar purposes in their respective cloud
platforms,
providing centralized and secure access management to various resources and
services within the cloud environment.

Install aws cli


https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
aws --version
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-user.html
aws iam create-user --user-name Bob --profile aws-training
aws configure --profile aws-training
enter your access key => <accesskey>
enter your access secret => <accesssecret>
enter region => ap-south-1
output format => json
aws iam create-user --user-name Bob --profile <your-name>
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --user-name Bob --profile aws-training
aws iam detach-user-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --user-name Bob --profile aws-training
aws iam delete-user --user-name Bob --profile aws-training
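The same IAM user lifecycle can also be driven from the Python SDK (boto3), which is used later in the course for Lambda. A minimal sketch, assuming the aws-training profile configured above and a caller that already has IAM permissions:

# Minimal boto3 sketch of the IAM user lifecycle shown above.
# Assumes the aws-training profile is configured locally.
import boto3

session = boto3.Session(profile_name="aws-training")
iam = session.client("iam")

# Create the user
iam.create_user(UserName="Bob")

# Attach and later detach the IAMFullAccess managed policy
policy_arn = "arn:aws:iam::aws:policy/IAMFullAccess"
iam.attach_user_policy(UserName="Bob", PolicyArn=policy_arn)
iam.detach_user_policy(UserName="Bob", PolicyArn=policy_arn)

# Delete the user once it has no attached policies
iam.delete_user(UserName="Bob")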

MAC/Linux
cd ~/.aws
cat config
cat credentials

Windows
C:\Users\<username>\.aws
notepad config
notepad credentials

https://awscli.amazonaws.com/v2/documentation/api/latest/index.html
EC2 commandline reference -
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/index.html
Run EC2 instances - https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/run-instances.html
Attach EC2FullAccess to the cli-user
aws ec2 run-instances --image-id <ami-0e89f04ea160a6f51> --instance-type t2.micro --key-name aws-ec2-connect --profile aws-training
aws ec2 terminate-instances --instance-ids <instance-id> --profile aws-training
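The run/terminate pair can also be scripted with boto3; a sketch that mirrors the CLI commands above (same AMI id, key pair, and profile):

# Sketch: launch and terminate an EC2 instance with boto3.
# AMI id, key name, and profile mirror the CLI example above.
import boto3

session = boto3.Session(profile_name="aws-training")
ec2 = session.client("ec2")

# Launch a single t2.micro instance
response = ec2.run_instances(
    ImageId="ami-0e89f04ea160a6f51",
    InstanceType="t2.micro",
    KeyName="aws-ec2-connect",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Terminate it again
ec2.terminate_instances(InstanceIds=[instance_id])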

Login to the EC2 instance


Linux/Mac users
cd ~/.aws
ls
cat credentials
cat config
Windows users
C:\Users\<yourname>\.aws
notepad credentials
notepad config

Limitations of storing the credentials in plain text


- The credentials are stored in plain text.
Hence, anyone who has access to the ec2-instance will be able to see the credentials,
leading to a privilege-escalation security vulnerability
- When you rotate the credentials, it becomes difficult to manage the credential updates

Recommended best practice


- Use IAM Role

# rm credentials
# aws ec2 run-instances --image-id ami-0e89f04ea160a6f51 --instance-type t2.micro --key-name aws-ec2-connect

Dedicated instance and Dedicated host - https://aws.amazon.com/ec2/dedicated-hosts/

Terminate ec2 instance


- https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/terminate-instances.html
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

Three ways of building applications and deploying on Cloud


- Traditional IT / Monolith
- Compute
- instances
- Load Balancers
- Auto Scaling groups
- Monitoring tools
- DevOps
- Microservices
- Small, independently deployable
- Loosely coupled
- organized into small teams
- Containers
- Containers as first class citizens (Docker, K8s, Openshift)
- Scale, discoverable, availability
- EKS, AKS, ECS, GKE
- Serverless
- Vendor specific solutions
- No infrastructure overhead
- Easily scalable
- SQS, SNS, API Gateway, Lambda, Step functions, DynamoDB

Lab-1
Create a key pair
default values
- pem for mac users
- ppk for Putty tool

Security group -> firewall ports


Create a security group
- name - ssh-security-group
- description -
- Inbound rule
add SSH
source - Anywhere - IPv4
- Tag
key -> Name
value -> ssh-security-group

Instance
- Click on Launch instances
- Name -> test-vm
- OS Image -> Amazon-linux-2
- Instance type -> t2-micro
- Select the key-pair created
- Select the existing security group
- Go with the default storage (8 GB)
- Click on the launch instance

Logging into the instance


- Click on the checkbox
- Click on the connect button
- Choose the SSH client tab
- Copy the example SSH command and connect to the instance using the terminal
EC2-instance
sudo yum -y update

Lab-2 - Install Nginx


command to patch the updates - sudo yum -y update
install git - sudo yum -y install git java-11-amazon-corretto telnet
Install Nginx web server - sudo yum install nginx -y
this command starts the server after the reboot - sudo systemctl enable nginx
command to start the Nginx server - sudo systemctl start nginx
command to verify if the server is running - sudo systemctl status nginx
Verify that the server is running - curl http://localhost:80

Navigate to Management console and copy the public IP address of the instance
Navigate to the browser and hit http://<ip-address>:80

Navigate to the security group


select the ssh-security group
edit the inbound rules
add a new rule allowing port number 80 from Anywhere-IPv4
Save the rule and test the http://<ip-address>:80 from the browser

Instance types - https://aws.amazon.com/ec2/instance-types/

Lab-2
Add the below under the user data script in the Advanced details section

For Amazon Linux 2


#!/bin/bash
sudo yum update -y
sudo yum install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx

In the security group, add the below entry


port number- 80, allow from ipv4 anywhere

Check the IP address from the browser http://<public-ip-address>

Status checks
- https://support.arcserve.com/s/article/202041339?language=en_US

- AWS EC2 (Amazon Web Services): "User data"


- Azure VM (Microsoft Azure): "Custom data"
- GCP Compute Engine (Google Cloud Platform): "Startup script"

AMI - Amazon Machine Image


- Immutable
- Read only
- AMI is specific to a region
- Copy the AMI to another region

Convert an EC2-instance <-> AMI

Creating a launch template


- Give the template a name
- Select the AMI
- Select the instance type to t2.micro
- Select the existing keypair
- Select the existing security group
- Click on Create launch template

- AWS uses "Launch Templates" or "Launch Configurations" to define and manage the
specifications for launching instances.
- The equivalent in Azure is "Virtual Machine Scale Sets" (VMSS) which lets you
create and manage a group of identical, load balanced VMs.
- In Google Cloud Platform (GCP), the equivalent service is "Instance Templates".
They allow you to specify the settings for your instances, which can be used to
create instances in a managed instance group or individually.

Auto scaling groups


- In AWS it is referred to as "Auto Scaling group"
- In Azure it is referred to as "Virtual Machine Scale Sets"
- In GCP it is referred to as "Instance Group"

Creating an Auto-scaling group


- Name of the ASG - nginx-asg
- Select the launch template created earlier
- Default VPC
- Select ap-south-1a and ap-south-1b

Associating a Load balancer to the instances


- Create a security group called alb-security-group
source - 80 from 0.0.0.0/0
- Create a Loadbalancer - Application Load Balancer
- Create a target group
- In the ASG, associate the load balancer with the target group

- SSH to one of the EC2 instance


- navigate to the /usr/share/nginx/html directory
- sudo vi index.html
- change the page style, e.g. background-color: green

Deleting the resources


- Delete the Load Balancer
- Delete the Auto scaling group
- Delete the Target group
- Delete the Launch template
- Deregister the AMI
- Delete the snapshot

AWS CLI
Installation - https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
verification of aws-cli - aws --version
Command Line reference document - https://docs.aws.amazon.com/cli/latest/index.html
Creating an IAM user
aws iam create-user --user-name test-user
Create EC2 instance - https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html
aws ec2 run-instances --image-id ami-072ec8f4ea4a6f2cf --instance-type t2.micro --key-name pradeep-ec2-keypair
Terminate EC2 instance - i-0275888172286a8b1
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

Rule of least privilege

Example
aws ec2 run-instances --image-id ami-0c768662cc797cd75 --instance-type t2.micro --security-group-ids sg-065ce32c0989b1954 --key-name ssh-connect

describe the ec2 instances - aws ec2 describe-instances


terminate the instance - aws ec2 terminate-instances --instance-ids <instance-id>

Security - IAM

Identity based policy


policy has been attached to the group

Identity based policy


- Identity/Principal - Logged in entity

Identity/Principal can be
- User group
- User
- Role
ARN - https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
{
"Version": "2012-10-17",
"Statement": [
{
# effect - Allow/Deny
"Effect": "Allow",
#verbs
"Action": "*",
# resource
"Resource": "*"
}
]
}
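As a sketch of how such an identity-based policy can be created and attached programmatically with boto3 (the policy name and the user name Bob below are illustrative, not part of the lab):

# Sketch: create a customer-managed policy from the JSON above and attach it to a user.
# Policy name and user name are illustrative only.
import boto3, json

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"}
    ],
}

created = iam.create_policy(
    PolicyName="demo-admin-policy",
    PolicyDocument=json.dumps(policy_document),
)

# Attach the new policy to an IAM user
iam.attach_user_policy(UserName="Bob", PolicyArn=created["Policy"]["Arn"])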

Lab-3

- Update - sudo yum -y update


- Install java-11 version - sudo yum install -y java git
- java -version
- git version
- Clone the git repo - git clone
https://gitlab.com/12-12-vmware-microservices/orders-microservice.git
- cd into the orders-microservice directory - cd orders-microservice
- sh mvnw clean package -DskipTests
- cd target
- java -jar order-microservice-0.0.1-SNAPSHOT.jar

Storage
- Three types of Storage
- Block - EBS
- File Storage - EFS
- Object storage - S3

Block storage
- Elastic Block storage (EBS)
- The data is split into discrete blocks
- For efficient read and write
- There are two types of disk - SSD, HDD
- Upfront provisioning of capacity
- You will be charged for the disk space and disk input-output (I/O)
- EBS volume is tied to an availability zone (AZ)
- At any given point in time, it can be attached to only one EC2 instance
- One EC2 instance can be assigned multiple EBS volumes
- The EC2 instance should be in the same AZ as the EBS volume
- Direct attach storage and Storage area Network
- Azure - Azure Disk storage
- GCP - Google persistent Disk
- Types of EBS volumes
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
- General purpose (GP)
- IO
- SSD
- HDD

EC2 placement groups -


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

Considerations for IOPS


To calculate the IOPS (Input/Output Operations Per Second) for an Amazon Elastic
Block Store (EBS) volume, you need to consider the following factors:
EBS volume type:
Amazon EBS provides four types of volumes
- General Purpose SSD (gp2)
- Provisioned IOPS SSD (io1)
- Throughput Optimized HDD (st1)
- Cold HDD (sc1)

Volume size:
- The IOPS performance of an EBS volume is directly proportional to the volume
size.
- Larger volumes generally have higher IOPS performance than smaller ones.

Burst performance:
- Some EBS volume types, such as gp2, provide burst performance.
- This means that the volume can deliver IOPS beyond its baseline level for a
limited time
depending on the volume size.

Provisioned IOPS:
- The maximum IOPS you can provision for an io1 volume is 64,000.

To calculate the IOPS for an EBS volume, use the following formula:

IOPS = Volume Size (in GiB) x IOPS per GiB

For example, if you have a 500 GiB gp2 volume, the IOPS performance would be:
IOPS = 500 x 3 = 1500 IOPS (gp2 volumes deliver 3 IOPS per GiB)

If you have an io1 volume with a provisioned IOPS of 5000 and a volume size of 100
GiB, the IOPS performance would be:
IOPS = 5000 (provisioned IOPS)
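A small helper that applies the formulas above; the 100 IOPS floor and 16,000 IOPS ceiling for gp2 are assumptions based on AWS's published gp2 limits and are not part of the notes:

# Sketch: EBS IOPS estimation using the formulas above.
def gp2_iops(volume_size_gib: int) -> int:
    # gp2 delivers 3 IOPS per GiB (floor/ceiling added as an assumption)
    baseline = volume_size_gib * 3
    return max(100, min(baseline, 16000))

def io1_iops(provisioned_iops: int) -> int:
    # io1 performance equals whatever was provisioned
    return provisioned_iops

print(gp2_iops(500))   # 1500, matching the 500 GiB gp2 example
print(io1_iops(5000))  # 5000, matching the io1 example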

1. Identify the performance metrics:


Identify the performance metrics you want to monitor, such as IOPS,
throughput, latency, queue depth, and disk utilization.
2. Use operating system tools:
Most operating systems have built-in tools that allow you to monitor
storage performance. For example, on Linux systems, you can use tools such as
iostat, vmstat, and sar to monitor IOPS, throughput, and other metrics. On Windows
systems, you can use tools such as PerfMon or Resource Monitor.
3. Use vendor-specific tools:
Many storage vendors provide tools that allow you to monitor the performance
of their storage systems. These tools may provide more detailed information about
the storage system's performance than operating system tools. For example, EMC
provides Unisphere for VNX systems, and NetApp provides OnCommand Performance
Manager for its storage systems.
4. Use third-party tools:
There are many third-party tools available that can help you monitor storage
performance. Some popular options include Nagios, Zabbix, and PRTG Network Monitor.
5. Set up alerts: Set up alerts for key performance metrics to be notified when
performance issues arise. This will help you proactively address performance issues
before they become critical.

Storage -
1. Attach the EBS volume to the EC2 instance from the management console
2. Login to the EC2 instance
3. Run the df -h command
4. Run the lsblk command - to list the block size
5. Run the command to check the file system - sudo file -s /dev/xvdf
6. To create the file system, run the command - sudo mkfs -t xfs /dev/xvdf
7. Create a directory with the command - sudo mkdir -p /app/data
8. Mount the volume to the /app/data directory with the command - sudo mount
/dev/xvdf /app/data
Add files into the shared volume
cd /app/data
sudo touch file.txt

9. Navigate to the home directory - cd ~/

10. Unmount the volume to the /app/data directory with the command - sudo
umount /app/data
11. Detach the volume from the ec2 instance in the EC2 dashboard volume

Optional
Create another ec2 instance in the same az name - second-instance
In the volume section, detach the volume and attach the volume to the second
instance
SSH to the second instance
Run the command to check the file system - sudo file -s /dev/xvdf
Create a directory with the command - sudo mkdir -p /app/data
Mount the volume to the /app/data directory with the command - sudo mount
/dev/xvdf /app/data
Check for the file.txt using the ls inside the /app/data directory

Use cases:
- Creating a backup
- encrypt the volume
- Move the volume from one AZ/Region to another
- Change the storage type

Solution -> Snapshot

snapshot <=> volume


volume <=> snapshot
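Both directions map to single EC2 API calls; a hedged boto3 sketch where the volume id, snapshot description, AZ, and volume type are placeholders:

# Sketch: volume -> snapshot and snapshot -> volume with boto3.
import boto3

ec2 = boto3.client("ec2")

# volume -> snapshot (backup, encryption, AZ/region move, type change)
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="backup before migration",
)

# snapshot -> volume (restore into another AZ and/or a different type)
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="ap-south-1b",
    VolumeType="gp2",
)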

Elastic File Storage


- File storage - NFS
- Dynamically provisioned - Scales automatically
- Pay for what you use
- Managed service specific to a region
- Can be attached with multiple EC2 instances from across AZ
- EC2 instances can concurrently modify and access the files
- Specific to region and not availability zone

Use-cases
- Can be used as a file storage
- To create repositories
- To create shared file systems

Lab - https://docs.aws.amazon.com/efs/latest/ug/wt1-test.html

EFS- volume

Steps for mounting the EFS volume - https://docs.aws.amazon.com/efs/latest/ug/wt1-test.html
EFS quota and limits - https://docs.aws.amazon.com/efs/latest/ug/limits.html
- Network attached storage
- Span across availability zones in a region
- Dynamically grow and shrink
- Mounted to multiple EC2 instances

- Azure files - Azure


- Azure offers Azure Files, which is a fully managed file share service that can
be accessed via the Server Message Block (SMB) protocol or the Network File System
(NFS) protocol.
- Azure Files provides a scalable and highly available file share that can be
used as shared storage for applications running in the cloud or on-premises.
- Like EFS, Azure Files offers a simple and flexible way to store and share files
across multiple instances.

- Cloud Filestore - GCP


- GCP offers Cloud Filestore, which is a managed file storage service that
supports the industry-standard NFS protocol.
- Cloud Filestore provides a fully managed NFS file share that can be used for a
variety of use cases, such as content management, home directories, and application
storage.
- Cloud Filestore offers high performance and availability, and it can be easily
integrated with other GCP services such as Compute Engine and Kubernetes Engine.

Lab on EFS
- Create a NFS security group
- allow inbound connection to NFS port(2049) from anywhere (0.0.0.0/0)
- Create a EFS volume in multiple AZ
- Create two EC2 instances in two different availability zones
- Login to both the instance
- Run the following commands in both the instances
- sudo yum -y update
- sudo yum -y install nfs-utils
- mkdir -p ~/data
- In EFS, add mount targets in the availability zones
- Edit the default security group used by EFS to allow all traffic from the
security group used by the instances
- Mount the EFS using the instructions in the EFS attach screen
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-017979aaa4c5bdd13.efs.ap-south-1.amazonaws.com:/ ~/data
Validate with df -h on ec2 instances
Troubleshooting steps
- https://docs.aws.amazon.com/efs/latest/ug/troubleshooting-efs-mounting.html
- DNS server not available
- service rpcbind restart
- service nfs restart

Object storage
- Unstructured data
- Cannot mount this to the instances
- Access objects stored in buckets via the S3 API over HTTPS
- S3 bucket is specific to region
- objects are stored using key value pair
- key should be unique
- To modify the object, you need to download the entire object, modify and upload
the object

Properties of S3
- Supports Versioning - Can be used to store multiple versions of a document
- Host Static websites
- Encryption at rest using AES-256
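The versioning and encryption properties listed above can be switched on per bucket from the SDK as well; a minimal boto3 sketch, with a placeholder bucket name:

# Sketch: enable versioning and default AES-256 encryption on a bucket.
# The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"

# Keep multiple versions of every object
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Encrypt new objects at rest with AES-256 by default
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)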

Azure (Azure Blob Storage):


- Fully managed cloud object storage service
- Supports unstructured data such as images, videos, documents, and log files
- Provides high availability and durability for data
- Supports multiple access tiers, including hot, cool, and archive
- Can help optimize costs based on usage patterns

GCP (Google Cloud Storage):


- Scalable and highly available object storage service
- Can be used for data backup and archiving, content delivery, and application data
storage
- Offers different storage classes, including Standard, Nearline, and Coldline
- Helps balance performance and cost based on specific needs.

Use-cases
- To store media, audio, log files, documents
- Host static websites
- Data is encrypted at rest using AES-256
- Supports versioning
- Managed solution and is a serverless offering
- Data transfer in is free
- Data transfer out is chargeable

Amazon S3 (Simple Storage Service) is a widely used object storage service provided
by AWS.
Many companies and organizations across various industries utilize S3 for their
storage needs.
While it is impossible to provide an exhaustive list, here are some well-known
companies that use Amazon S3:
S3 pricing - https://aws.amazon.com/s3/pricing/?p=pm&c=s3&z=4

- Netflix: Netflix uses S3 to store and deliver streaming content to millions of


subscribers worldwide.
- Airbnb: Airbnb utilizes S3 to store and manage various types of data, including
user-generated content, images, and other files.
- Pinterest: Pinterest relies on S3 for storing and serving user-generated images,
photos, and other media files.
- NASA: NASA leverages S3 to store and distribute a vast amount of satellite
imagery, scientific data, and research materials.
- Spotify: Spotify uses S3 to store music files and deliver audio content to its
users.
- Reddit: Reddit utilizes S3 to store images, media files, and other user-generated
content shared on the platform.
- Dow Jones & Company: Dow Jones, a publishing and financial information firm, uses
S3 for storing and managing its vast collection of historical news archives and
other data.
- Lyft: Lyft, a popular ride-sharing platform, relies on S3 for storing various
types of data, including user profile pictures, ride history, and other files.
- Slack: Slack, a collaboration and communication platform, uses S3 for storing
files, documents, and attachments shared by users within their channels.

Policy examples - https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
Resource based policy - https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html
Bucket policy

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::classpathio-media/*"
]
}
]
}
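The same bucket policy can be applied from code; a sketch that reuses the classpathio-media bucket named in the policy above:

# Sketch: attach the public-read bucket policy shown above with boto3.
import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": ["arn:aws:s3:::classpathio-media/*"],
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="classpathio-media", Policy=json.dumps(policy))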

ARN - https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html
- Amazon Resource name
- Unique string representing the resource
- Format of ARN - arn:partition:service:region:account:resource

S3 Policy Examples - https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html

Lab - Assume role to access S3 bucket


=============
1. Create an S3 bucket
2. Add some files into the bucket
3. Create a bucket policy - Allows external users with bucket permissions
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::703363723066:user/admin_new",
"arn:aws:iam::688964866425:user/admin",
"arn:aws:iam::915465141737:user/admin",
"arn:aws:iam::008152901260:user/admin"
]
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::aws-training-3536/*"
}
]
}
arn:aws:iam::831955480324:user/admin
4. Create a custom IAM Policy for S3 read access

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:GetObject"
],
"Resource": "*"
}
]
}
5. Attach the policy with a Role
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::703363723066:user/admin_new",
"arn:aws:iam::688964866425:user/admin",
"arn:aws:iam::008152901260:user/admin",
"arn:aws:iam::915465141737:user/admin"
]
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}

- S3 pricing - https://aws.amazon.com/s3/pricing/

6. Share the link for assuming the role
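Programmatically, the external account assumes the role through STS and uses the temporary credentials it gets back; a sketch where the role ARN is a placeholder for the role created in step 5 and the bucket is the one from the bucket policy above:

# Sketch: assume the cross-account role and read the shared bucket.
# The role ARN is a placeholder for the role created in this lab.
import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/s3-read-role",
    RoleSessionName="s3-read-session",
)
creds = assumed["Credentials"]

# Build an S3 client from the temporary credentials
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

for obj in s3.list_objects_v2(Bucket="aws-training-3536").get("Contents", []):
    print(obj["Key"])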


Databases:
- Databases can be classified into
- Relational
- MySQL, Oracle, Sybase, Postgresql
- Apply the normalization technique to eliminate the anomalies (Creation, Updation, Deletion)
- Database manages the data integrity using the constraints
- The data is split into multiple tables
- Vertical scaling
- C and A are met from the CAP theorem
- Strict adherence to the schema
- Suitable for transactional workloads
- Non-Relational, also referred to as Not-Only-SQL or NoSQL
- Can scale horizontally
- Since it is partition tolerant, you can choose either CP or AP
- Document
The record is stored as documents in the format of JSON/BSON
Ex - MongoDB
- Key-Value
- The data is stored in key-value format
- Key should be unique
- High throughput - can support more than 10 trillion requests per day
- Managed serverless database
- Automatically scale
- Does not have persistent connection
dynamodb.<region-name>.amazonaws.com
dynamodb.ap-south-1.amazonaws.com
Ex: DynamoDB
- In-memory
- The data is stored in memory for faster access
- Can be used to store session data, cache
- As a datastore
Ex: Redis
- Columnar
- Data is stored in columns
Ex: Cassandra
- Ledger
- Graph
- Store the data as nodes
- Used for social networking, connections
- Neptune
- Time series

RDS Lab
- Create a RDS
- Login to the EC2 instance
- Update the packages - sudo yum -y update
- Not needed - Install the Mariadb server - sudo dnf install mariadb105-server
- Install Mysql client
-
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToInstance.html
- sudo dnf install mariadb105
- Connect to the Mysql Server - mysql -u admin -h <db-endpoint> -p
mysql -u admin -h database-1.c4xbzwxvwwsz.ap-south-1.rds.amazonaws.com -p
- Enter the password in the console:

Run the following commands after connecting to the RDS instance


- create database employees;
- show databases;
- use employees;
- create table employees(id int primary key auto_increment, name varchar(24), age
int);
- show tables;
- insert into employees (name, age) values ("Ramesh", 33);
- select * from employees;

DynamoDB
- Fully Managed NoSQL Database Service:
Amazon DynamoDB takes away the complexity of managing a database, handling
tasks like hardware provisioning, setup,
configuration, replication, software patching, and cluster scaling.
- Seamless Scalability:
DynamoDB allows you to scale up or down your databases according to your
application needs,
without any downtime or performance degradation.
- Event-driven Programming:
With DynamoDB Streams, you can capture table activity, and trigger specified
actions based on this data,
perfect for real-time processing.
- Automatic Partitioning:
To support your throughput requirements, DynamoDB automatically partitions your
tables over an adequate number of servers.
- Built-in Security:
Amazon DynamoDB provides built-in security features like encryption at rest,
allowing you to secure your data and meet compliance requirements.
- Low Latency: DynamoDB is designed to provide consistent, single-digit millisecond
latency for read and write operations,
making it suitable for high-speed applications.
- In-memory Caching: DynamoDB Accelerator (DAX) provides an in-memory cache for
DynamoDB, to deliver faster access times for frequently accessed items.
- On-demand and Provisioned Capacity Modes: DynamoDB allows you to choose between
on-demand capacity (for flexible, pay-per-request pricing) and provisioned capacity
(for predictable workloads and cost efficiency).
- Global Tables: This feature enables multi-region replication of tables, providing
fast local performance for globally distributed applications.
- Integrated with AWS ecosystem: As part of the AWS ecosystem, DynamoDB integrates
seamlessly with other AWS services like AWS Lambda, Amazon EMR, Amazon Redshift,
and Amazon Data Pipeline.

Scalability and Performance:
DynamoDB offers seamless horizontal scalability, automatically distributing data
across multiple partitions.
Provides consistent, low-latency performance regardless of data volume,
accommodating high traffic and sudden spikes.

Flexible Schema and NoSQL Modeling:
DynamoDB allows flexible schema design, accommodating varying data structures
without strict pre-defined schemas.
Supports nested and complex data types, enabling efficient storage of diverse data.

High Availability and Fault Tolerance:
DynamoDB offers built-in replication and multi-region support for high availability.
Provides automatic data backups and data durability, ensuring data integrity even
during failures.

Fully Managed Service:
DynamoDB is a managed service, handling infrastructure provisioning, scaling, and
maintenance.
Reduces operational overhead, allowing developers to focus on application logic
rather than database management.

Cost Efficiency and Pay-as-You-Go Pricing:
Offers a flexible pricing model based on provisioned capacity or on-demand usage.
Cost-effective for varying workloads, as you only pay for the resources you consume.

Access Patterns Analysis:

Understand the most common query patterns your application will have, such as how
data will be retrieved, updated, or queried.
Identify the primary ways data will be accessed to design an efficient partition
key.
Uniform Data Distribution:

Choose a partition key that distributes data uniformly across partitions to avoid
"hot" partitions with excessive read or write activity.
Uneven data distribution can lead to performance bottlenecks and throttling.
Cardinality and Selectivity:

Opt for a partition key with high cardinality (a large number of distinct values)
to distribute data evenly.
Ensure the partition key has good selectivity, meaning it's used frequently for
queries and provides a diverse range of values.
Query Isolation:

Consider how well the chosen partition key isolates different types of queries from
each other.
Queries with different partition keys can run concurrently without causing
contention or performance issues.
Data Growth and Size:

Anticipate the potential growth of data over time and choose a partition key that
can accommodate future expansion.
Avoid partition keys that lead to partitions becoming too large, as it can impact
performance and scalability.

Read Capacity Units (RCUs)

Choose the Read Type: Eventually consistent reads consume half the RCUs of strongly consistent reads.
- Strongly Consistent Read: 1 RCU per read per second (for items up to 4 KB)
- Eventually Consistent Read: 0.5 RCU per read per second (for items up to 4 KB)

Calculate Item Size: If your items are larger than 4 KB, you'll need more RCUs.
RCUs per read = (Item Size in KB / 4) * Read Type Factor

Factor in the Read Operations: Multiply the RCUs by the expected reads per second.
Total RCUs = RCUs per read * Number of Reads per Second

Example:
Suppose you need to perform 100 strongly consistent reads per second, and the items are 8 KB each.
Item Size Factor: 8 KB / 4 = 2
Strongly Consistent Reads: 1 RCU * 2 (from Item Size) = 2 RCUs per read
Total RCUs: 2 * 100 = 200 RCUs

Write Capacity Units (WCUs)

Calculate Item Size: 1 WCU for a write per second (for items up to 1 KB)
Factor in the Item Size: If your items are larger than 1 KB, you'll need more WCUs.
Total WCUs = (Item Size in KB) * Number of Writes per Second

Example:
Suppose you need to perform 50 writes per second, and the items are 2 KB each.
Total WCUs: 2 KB * 50 = 100 WCUs


So in this example, you would provision 200 RCUs and 100 WCUs for your DynamoDB
table.
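The same arithmetic as a small helper, so the worked examples can be re-checked for other item sizes; rounding each item up to whole 4 KB / 1 KB units is an assumption (capacity units are whole numbers):

# Sketch: RCU/WCU calculator matching the formulas above.
import math

def rcus(reads_per_second: int, item_size_kb: float, strongly_consistent: bool = True) -> int:
    per_read = math.ceil(item_size_kb / 4)      # number of 4 KB units per read
    if not strongly_consistent:
        per_read = per_read * 0.5               # eventually consistent reads cost half
    return math.ceil(per_read * reads_per_second)

def wcus(writes_per_second: int, item_size_kb: float) -> int:
    per_write = math.ceil(item_size_kb)         # number of 1 KB units per write
    return per_write * writes_per_second

print(rcus(100, 8))   # 200, as in the read example above
print(wcus(50, 2))    # 100, as in the write example above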

Lab:
- Create a DynamoDB table. It is region specific
- Name of the table - <your-name>-employees
- Partition key - id, type -> Number
- Sort key - name, type -> String

Using CLI
aws dynamodb put-item --table-name employees --item '{"id": {"N": "101"}, "name": {"S": "John Doe"}}'
aws dynamodb put-item --table-name employees --item '{"id": {"N": "101"}, "name": {"S": "John Doe"}, "city": {"S": "Mangalore"}, "zip": {"S": "577142"}}'

aws dynamodb query --table-name employees --key-condition-expression "id = :idValue" --expression-attribute-values '{ ":idValue": {"N": "101"} }'

aws dynamodb scan --table-name employees

aws dynamodb scan --table-name employees --limit 10
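The same put/query operations through the boto3 resource API; the table name, region, and key schema follow the lab above (numeric partition key id, string sort key name):

# Sketch: boto3 equivalent of the CLI put-item / query commands above.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
table = dynamodb.Table("employees")

# Put an item (id is the numeric partition key, name is the sort key)
table.put_item(Item={"id": 101, "name": "John Doe", "city": "Mangalore", "zip": "577142"})

# Query every item that shares the partition key
response = table.query(KeyConditionExpression=Key("id").eq(101))
for item in response["Items"]:
    print(item)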

Local Secondary index - LSI


- Can be created only during creation time
- Cannot change the partition key
- Can be used to project the desired fields

Global Secondary index - GSI


- Can be created after table creation
- Can change the partition key and the sort key
- Can be used to project the desired fields
- Will consume additional storage space as well as RCU/WCU

SQS
- A serverless offering from AWS
- Offered in Standard and FIFO types
- Standard type provides at-least-once delivery semantics
- FIFO type provides exactly-once delivery semantics
- Point-to-point; decouples the Producer from the Consumer
- Acts as a buffer
- The producer sends the message to the Queue
- The consumer polls the message from the Queue
- The visibility timeout specifies how long a received message stays hidden from
other consumers before it becomes visible again
- The consumer should delete the message after processing

Lab:https://sqs.ap-south-1.amazonaws.com/831955480324/messages
aws sqs send-message --queue-url https://sqs.YOUR_REGION.amazonaws.com/YOUR_ACCOUNT_ID/YOUR_QUEUE_NAME --message-body "Your message text here"
aws sqs receive-message --queue-url https://sqs.YOUR_REGION.amazonaws.com/YOUR_ACCOUNT_ID/YOUR_QUEUE_NAME --max-number-of-messages 1
aws sqs delete-message --queue-url https://sqs.YOUR_REGION.amazonaws.com/YOUR_ACCOUNT_ID/YOUR_QUEUE_NAME --receipt-handle YOUR_RECEIPT_HANDLE_HERE
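The same send/receive/delete cycle with boto3, assuming the queue URL from the lab above:

# Sketch: SQS produce/consume cycle with boto3.
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = "https://sqs.ap-south-1.amazonaws.com/831955480324/messages"

# Producer sends a message to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody="Your message text here")

# Consumer polls for a message (long polling for up to 10 seconds)
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
).get("Messages", [])

# Consumer must delete the message after processing it
for message in messages:
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])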

Using Java SDK


https://gitlab.com/19-06-23-cisco-aws-dev/sns-producer

SNS - Simple Notification Service


- A serverless offering from AWS for Publish/Subscribe
- Offered in Standard and FIFO types
- Publisher publishes the message to the TOPIC
- Subscribers subscribe to the topic
- The messages are pushed to all the subscribers
- No message retention
- One to many
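A boto3 sketch of the publish/subscribe flow; the topic name and email endpoint are placeholders:

# Sketch: SNS publish/subscribe with boto3.
import boto3

sns = boto3.client("sns", region_name="ap-south-1")

# Create (or reuse) a standard topic
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# Subscribe an endpoint; every subscriber receives a copy of each message
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Publish a message, which is pushed to all subscribers of the topic
sns.publish(TopicArn=topic_arn, Message="Order 101 has been shipped")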

AWS CLI
aws --version
aws s3api list-buckets
aws configure

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListAllMy niBuckets",
"s3:DescribeJob",
"s3:ListBucket",
"s3:GetBucketVersioning",
"s3:GetBucketPolicy"
],
"Resource": "*"
}
]
}

Resource based policy


-
AWS CLI - command line reference
https://docs.aws.amazon.com/cli/latest/reference/ec2/index.html

10.0.0.0/24 - VPC

https://aws.amazon.com/premiumsupport/knowledge-center/vpc-ip-address-range/
- VPC stands for Virtual private cloud
- It is a private cloud in AWS
- Defines the network boundary
- VPC is a regional resource
- VPC spans across AZ
- There is a default VPC for every region
- You can create up to 5 VPC per region
- Defines the IP address pool
- Every instance/loadbalancer/EFS/kafka-worker-nodes/eks-worker-nodes will borrow
the private IP address from the Pool
- You can request for more VPC per region by raising a support ticket with AWS
support team
- Two different VPCs cannot have overlapping IP address ranges if you want VPC peering
- Practical use cases of VPC is
- For creating network isolation for a product with multiple environments
- For creating network isolation for different products/applications

Networking topics
- IP address represents a unique address/coordinate within a network
- IP address can be in IPv4 or IPv6
- IPv4 is made up of 4 octets, e.g. 10.0.0.0
- Each octet is made of 8 bits
- 8 bits - 2^8 = 256 values, range 0 - 255
- IPv4 range: 0.0.0.0 - 255.255.255.255
- Private VPC - you choose an IP address range

Useful tools
- https://cidr.xyz
- https://www.ipaddressguide.com/cidr

CIDR range - Classless interdomain routing


10.0.0.0/x
- x represents the number of network (reserved) bits
- The number of IP addresses will be 2^(32-x)
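Python's standard ipaddress module can double-check the 2^(32-x) arithmetic and the /26 splits used in the lab below:

# Sketch: CIDR arithmetic with the standard ipaddress module.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/24")
print(vpc.num_addresses)            # 256 = 2 ** (32 - 24)

# Split the /24 VPC range into four /26 subnets of 64 addresses each
for subnet in vpc.subnets(new_prefix=26):
    print(subnet, subnet.num_addresses)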

Lab

- Creating a VPC
- Create a VPC in the Mumbai region
- select VPC only radio button
- Name of the VPC - <name>-lab-vpc
- Select the IPv4 CIDR manual input radio button
- Enter the CIDR block - 10.0.0.0/24 range 10.0.0.0 - 10.0.0.255 (256 IP
addresses)
- Leave all the other fields to default values and create the VPC
Subnet
- Represents subset of the IP address pool
- Group related instances into subnets
- Subnets are specific to Availability zone

Subnet-1
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- public-a
- Availability zone - ap-south-1a
- CIDR block - 10.0.0.0/26
- Range of the CIDR block - 10.0.0.0 - 10.0.0.63 (total 64 IP addresses)

Subnet-2
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- public-b
- Availability zone - ap-south-1b
- CIDR block - 10.0.0.64/26
- Range of the CIDR block - 10.0.0.64 - 10.0.0.127 (total 64 IP addresses)

Subnet-3
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- private-a
- Availability zone - ap-south-1a
- CIDR block - 10.0.0.128/26
- Range of the CIDR block - 10.0.0.128 - 10.0.0.191 (total 64 IP addresses)

Subnet-4
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- private-b
- Availability zone - ap-south-1b
- CIDR block - 10.0.0.192/26
- Range of the CIDR block - 10.0.0.192 - 10.0.0.255 (total 64 IP addresses)
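The same VPC and four subnets can also be created from the SDK instead of the console; a sketch using the CIDR blocks above (the Name tags are illustrative):

# Sketch: create the lab VPC and its four /26 subnets with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/24")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "lab-vpc"}])

subnets = [
    ("public-a",  "10.0.0.0/26",   "ap-south-1a"),
    ("public-b",  "10.0.0.64/26",  "ap-south-1b"),
    ("private-a", "10.0.0.128/26", "ap-south-1a"),
    ("private-b", "10.0.0.192/26", "ap-south-1b"),
]

for name, cidr, az in subnets:
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az
    )["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": name}])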

https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html
10.0.0.0: Network address.
10.0.0.1: Reserved by AWS for the VPC router.
10.0.0.2: Reserved by AWS.
10.0.0.3: Reserved by AWS for future use.
10.0.0.255: Network broadcast address.
We do not support broadcast in a VPC, therefore we reserve this address.

Internet Gateway
- Allows internet connectivity (both inbound and outbound) with the VPC
- Create an Internet gateway called lab-igw
- Internet gateway has a one-to-one relationship with the VPC
- Attach it to the lab-vpc

Route table
- Route table is a ledger for specifying the route entries
- Route table associates the subnets to Internet gateway/NatGateway
- A default route table is created for every VPC
- The name of the route table is main and it cannot be deleted
- create a route table public-route - in the lab-vpc
- To the public-route table associate the two public subnets(public-a and public-b)
explicitly
- Edit the routes and add an entry with destination 0.0.0.0/0 targeting the internet gateway

Settings at VPC level


- Under the edit VPC settings - check the enable the DNS hostnames checkbox

Settings at the public subnets


- For both the public subnet, -> edit subnet settings -> Enable auto-assign public
IPv4 address to true

- SSH keys (.pem/.ppk) are user specific


- Security groups is specific to VPC

Create an instance in the public subnet


- Keypairs are specific to users to a region
- Security group is specific to VPC

After logging into the bastion host


sudo yum -y update
sudo yum -y install telnet
curl http://<private-ip>

SSH Connect to instance in the private subnet from the instance in the public
subnet
copy the ssh-keypair to the instance in the public subnet
Format - scp -i <pem-file> <pem-file> ec2-user@<public-ip_ec2>:/tmp/
example - scp -i "ec2-connect-new.pem" <source-file> <destination-location>
example:
scp -i "aws-ec2-connect.pem" aws-ec2-connect.pem ec2-user@<public-dns-of-ec2-instance>:/tmp/

ssh into the public instance


navigate to the /tmp directory
chmod 400 <keypair>.pem
ssh ec2-user@private-ip

Create a nat gateway in the public subnet


Pricing - https://aws.amazon.com/vpc/pricing/
Allocate the elastic IP address to the NAT gateway
Associate the nat gateway to the private route with 0.0.0.0/0

For connecting to the internet from the instance in the private subnet


- Create a NAT gateway
- Allocate an elastic IP address to the NAT gateway
- The NAT gateway should be present in the public subnet (any public subnet)
- Add the route in the private route table
0.0.0.0/0 -> NAT Gateway

Deleting the VPC

- Terminate all the running instances


- Disassociate subnets from both the route tables.
- Delete the route entries for internet gateway and nat gateway in both the route
table entries
- Delete the private and public route tables.
- Delete the NAT Gateway
- Release the Elastic IP address
- As long as the Elastic IP address is associated with a running EC2
instance/NAT gateway, it will not be charged
- If the Elastic IP address is not associated with any EC2 instance/NAT, it will
be charged
- Detach the internet gateway from the VPC and then delete the Internet gateway
- Delete the subnets
- Delete the VPC

In case you have deleted the default VPC by mistake, follow the below instructions
to create a default VPC
https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html

Infrastructure as Code (IaC)


- Write the code which provisions the infrastructure for you
- https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template
- Detects the drift between the code and the actual infrastructure
- Repeatable, Reusable, discoverable
- Can be version controlled
- Declarative method to create resources
- Can perform rollback in case of any exceptions
- Options
- Cloudformation
- AWS proprietary tool
- Can be written using json/yaml file
- Integrates natively with other AWS services
- Terraform
- Generic and can be used across other cloud providers
- Uses HashiCorp Configuration Language (HCL)
- No vendor lock-in
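CloudFormation stacks can themselves be created through the SDK; a hedged sketch that launches the sample template linked earlier in this section (the stack name and the KeyName parameter value are placeholders, and the parameter name is assumed from the AWS sample template):

# Sketch: create a CloudFormation stack from the sample template linked above.
import boto3

cfn = boto3.client("cloudformation", region_name="us-west-2")

cfn.create_stack(
    StackName="ec2-with-sg-sample",
    TemplateURL=(
        "https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/"
        "EC2InstanceWithSecurityGroupSample.template"
    ),
    Parameters=[{"ParameterKey": "KeyName", "ParameterValue": "aws-ec2-connect"}],
)

# Wait until the stack (and the EC2 instance it declares) is ready
cfn.get_waiter("stack_create_complete").wait(StackName="ec2-with-sg-sample")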

Setting up the VPC with Terraform

- Login to the ec2 instance


- Install git - sudo yum -y install git
- clone the git repository - git clone https://gitlab.com/31-10-jpmc-k8s/eks-terraform.git
- navigate to the directory - cd eks-terraform
- Steps to install terraform -
https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
- sudo yum install -y yum-utils
- sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
- sudo yum -y install terraform
- terraform version
- aws configure --profile terraform
- enter the access key
- enter the access secret
- enter the region - ap-south-1
- enter the output format - json
- Initialize the terraform - terraform init
- Run the plan command - terraform plan
- Execute the terraform script - terraform apply
- Destroy the resource - terraform destroy --auto-approve
- Codebase - https://gitlab.com/classpathio-terraform

Day-2
1. aws configure
enter the access key
enter the secret key
enter the region ap-south-1
enter the output format json
enter the command aws s3 ls
AWS cli-reference - https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html

AWS S3 list bucket command


aws s3api list-buckets

aws s3 create bucket command


aws s3api create-bucket --bucket my-bucket --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1

navigate to the home directory on the ec2 instance


cd ~/
cd .aws

EBS-volume
Bucket policy

Policy

{
# version - fixed
"Version": "2012-10-17",
"Statement": [
{
# unique name for the statement
"Sid": "read bucket",

#Principal - IAM user/group/role/arn


"Principal": "*",
# allow/deny
"Effect": "Allow",
# verbs on resources
"Action": [
"s3:*"
],
# on which resource
"Resource": "arn:aws:s3:::classpathio-aws-training/*"

}
]
}

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "read bucket",
"Principal": "*",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::classpathio-aws-training"
}
]
}

Lab - Lambda function - Automated snapshot creation using Lambda


----------------------

1. create a policy - pradeep-aws-ec2-snapshot-policy

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot",
"ec2:CreateTags",
"ec2:ModifySnapshotAttribute",
"ec2:ResetSnapshotAttribute"
],
"Resource": [
"*"
]
}
]
}

2. Create a role called pradeep-ec2-snapshot-role with the above policy

3. Create a lambda function with the above role created

# Backup all in-use volumes in all regions - pradeep-ec2-snapshot-lambda

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s " % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Get all in-use volumes in all regions
        result = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['in-use']}])

        for volume in result['Volumes']:
            print("Backing up %s in %s" % (volume['VolumeId'], volume['AvailabilityZone']))

            # Create snapshot
            result = ec2.create_snapshot(
                VolumeId=volume['VolumeId'],
                Description='Created by Tanmaya\'s Lambda backup function ebs-snapshots')

            # Get snapshot resource
            ec2resource = boto3.resource('ec2', region_name=reg)
            snapshot = ec2resource.Snapshot(result['SnapshotId'])

            volumename = 'pradeep-lambda'

            # Find name tag for volume if it exists
            if 'Tags' in volume:
                for tags in volume['Tags']:
                    if tags["Key"] == 'Name':
                        volumename = tags["Value"]

            # Add volume name to snapshot for easier identification
            snapshot.create_tags(Tags=[{'Key': 'Name', 'Value': volumename}])

Delete the snapshot


Updated Code to delete Snapshots
# Delete snapshots older than retention period

import boto3
from botocore.exceptions import ClientError

from datetime import datetime, timedelta

def delete_snapshot(snapshot_id, reg):
    print("Deleting snapshot %s " % (snapshot_id))
    try:
        ec2resource = boto3.resource('ec2', region_name=reg)
        snapshot = ec2resource.Snapshot(snapshot_id)
        snapshot.delete()
    except ClientError as e:
        print("Caught exception: %s" % e)

    return

def lambda_handler(event, context):
    # Get current timestamp in UTC
    now = datetime.now()
    # AWS Account ID
    account_id = ''

    # Create EC2 client
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s " % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Filtering by snapshot timestamp comparison is not supported
        # So we grab all snapshot id's
        result = ec2.describe_snapshots(OwnerIds=[account_id])

        for snapshot in result['Snapshots']:
            # print("Snapshot is older than configured retention of %d days" % (retention_days))
            delete_snapshot(snapshot['SnapshotId'], reg)

S3 bucket
Resource based policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::classpathio-test-bucket/*"
}
]
}

Docker and Containers

Useful repositories
- https://gitlab.com/classpath-docker/docker-lab
Steps to install docker - https://docs.docker.com/engine/install/
Steps to install docker on AWS - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html
- sudo yum -y update
- sudo yum install -y docker
- sudo systemctl start docker
- sudo systemctl enable docker
- sudo usermod -a -G docker ec2-user
- exit
- reconnect to the console
- docker info

Installing Docker on Centos

yum install -y yum-utils


yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin

docker installation on AWS Linux


https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html

systemctl start docker


docker info
docker images
docker container ls
docker pull hello-world
docker images

docker container ls

docker container run hello-world


docker container run --name my-first-container hello-world
docker container ls
docker container ls --all

docker pull nginx


docker container run nginx
docker images
docker container run -d (detached/background) -p <host-port>:<container-port> <image-name>
docker container run -d -p 80:80 nginx
docker container run -d -p 81:80 nginx
docker container ls

curl http://localhost
curl http://localhost:81
docker container run -d -p 80:80 nginx
docker container ls
curl http://localhost
command to login to the container - docker exec -it <container-id> /bin/bash
cd /usr/share/nginx/html
echo "hello-world" > index.html
exit
command to stop the container - docker container stop <container-id>
command to start the container - docker container start <container-id>
command to restart the container - docker container restart <container-id>
command to delete docker container rm <container-id>

docker container run -d -p 8080:8080 classpathio/order-microservice


docker container ls
docker logs <container-id>
curl http://localhost:8080/api/v1/orders
comprehensive commands
sudo yum install -y git
git clone https://gitlab.com/classpath-docker/docker-lab.git
cd docker-lab
cd 01-hello-world
docker build -t helloworld .
docker images
docker container run helloworld

cd 02-hello-web
docker build -t mynginx .
docker images
docker container run -d -p 80:80 mynginx

cd 03-express-crud-app
docker build -t orders-api .
docker images
docker container run -d -p 3000:3000 orders-api
curl http://localhost:3000/

Multi stage dockerfile - Spring boot application

git clone https://gitlab.com/12-12-vmware-microservices/orders-microservice.git


inside the order-microservice folder run the command - git pull origin master
docker build -t order-microservice .
docker images
docker container run -d -p 8080:8080 order-microservice
docker container ls
docker exec -it <container-id> /bin/bash
curl http://localhost:8080/api/v1/orders
Edit the security group of the EC2 instance and add a inbound rule - port number
(8080), source - 0.0.0.0/0
In the browser, hit the url http://<ec2-instance-ip>:8080/api/v1/orders

- To pull an image from the public repository you do not need credentials
- To pull an image from the private repository you need the credentials
- To push an image to public/private repository you need the credentials

docker login
username: classpathio
password: Welcome44

The format of the docker image is <docker-registry>/<docker-repo-name>/<image-name>:<version>
index.docker.io/classpathio/order-microservice:latest
registry.cisco.com/project-name/app-name:version

078711992964.dkr.ecr.ap-south-1.amazonaws.com/order-microservice
docker tag order-microservice classpathio/<yourname>-order-microservice:latest
docker push classpathio/<yourname>-order-microservice:latest

docker tag order-microservice classpathio/<yourname>-order-microservice:2.0.0


docker push classpathio/<yourname>-order-microservice:2.0.0

docker-registry: docker.io - default


Image version : default value - latest

docker image name - order-microservice


image is an alias/reference to the image-id
we can tag names to the image-id
docker tag <source-image> <destination-image>
docker tag order-microservice 660817125715.dkr.ecr.ap-south-1.amazonaws.com/order-microservice

To push docker images to the ECR,


aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin 831955480324.dkr.ecr.ap-south-1.amazonaws.com

aws configure
access key - <access-key>
secret key - <access-secret>
region - ap-south-1
output format - json

- Microsoft Azure - Azure Container Registry (ACR)


- ACR allows you to store Docker and Open Container Initiative (OCI) images for
all types of container deployments.
- Azure Pipelines to automatically build and patch containers, and then push
them to Azure Container Registry.

- Google Cloud Platform: Google Container Registry (GCR)


- Google Container Registry is a private Docker storage system on Google Cloud
Platform.
- It hosts your images in a high availability and scalable architecture, allowing
you to reliably deploy your images to your Kubernetes clusters or other cloud
environments.

ECS - Elastic Container Service


- proprietary solution to manage containers on AWS
- Create a cluster
- two options
- The containers will run on EC2 instances as worker nodes which are managed
by users
- The containers will run on Infrastructure managed by AWS. Also referred to
as Serverless
- Create a Task definition
- Serves as a blueprint of the container
- image url, networking, monitoring, port mapping are provided as input
- Deploy the task definition
- Task - Containers will be run as tasks which are managed by ECS
- Service - Containers are run as services managed by ECS
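To make the task-definition idea concrete, here is a hedged boto3 sketch that registers a task definition for the order-microservice image; the family name, CPU/memory values and the Fargate launch type are illustrative assumptions, not values from the session:

import boto3

ecs = boto3.client('ecs', region_name='ap-south-1')

# The task definition is the blueprint: image url, networking mode and port mapping
ecs.register_task_definition(
    family='order-microservice',
    requiresCompatibilities=['FARGATE'],   # AWS-managed (serverless) worker infrastructure
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    containerDefinitions=[{
        'name': 'order-microservice',
        'image': 'classpathio/order-microservice:latest',
        'portMappings': [{'containerPort': 8080, 'protocol': 'tcp'}]
    }]
)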

Kubernetes
Setting up Kubernetes cluster
- Cluster contains 2 components
- Control plane
- Data plane
- Control plane is managed by the platform
- AWS - EKS - Elastic Kubernetes Service
- Azure - AKS - Azure Kubernetes Service
- GCP - GKE - Google Kubernetes Engine
- On Prem
- To Setup Kubernetes
- AWS Management Console
- Kops
- eksctl
- Kubeadm
- git clone https://gitlab.com/12-12-vmware-microservices/kubernetes-manifests.git
- mkdir -p ~/.kube
-

- Install the kubectl client - https://kubernetes.io/docs/tasks/tools


- Verify the installation - kubectl version
- Install the AWS cli - https://docs.aws.amazon.com/cli/latest/userguide/getting-
started-install.html
- Verify the installation
aws configure --profile terraform
aws_access_key_id = AKIA4DNDJF4CIVLSF2FY
aws_secret_access_key = 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
enter the region- ap-south-1
enter the output format- json

Connecting to the K8s cluster (on DigitalOcean) from the EC2 instance


cd ~/
git clone https://gitlab.com/12-12-vmware-microservices/orders-microservice.git
cd orders-microservice
mkdir -p ~/.kube
cp config ~/.kube/
kubectl version

Output:
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0

enter the command aws s3 ls

Fetching the config file


- Owner of EKS will run the command
- The command updates the config file under ~/.kube/ location
- aws eks update-kubeconfig --region ap-south-1 --name eks-cluster --profile
terraform
z7jEgSM8cbaG2MCiyw6SaaWAXjizrzHgCxWdDobB
Deployment
- git clone https://gitlab.com/snippets/3606767.git
- cd 3606767
- ls
- mv snippetfile1.txt order-microservice.yaml
- kubectl apply -f order-microservice.yaml
- kubectl get nodes
- kubectl create ns <your-name>-ns
- kubectl config set-context --current --namespace=<your-name>-ns
- Create the Kubernetes resources - kubectl apply -f <file-name>.yaml
- Resources
- Pod, Service, Ingress, Configmap, secrets
- kubectl get rs
- kubectl get pods
- kubectl delete po --all
- kubectl get pods

- Delete the resources


- Delete the replica-sets - kubectl delete rs order-microservice-rs
- Delete the deployment - kubectl delete deploy --all
Automation
- Creation of Infrastructure
- Declaratively define the resources to be created
- Detect the drift between the code and the actual infrastructure
- Version controlled
- Repeatable, Reusable, Discoverable
- Ex - Cloudformation, Terraform
- Setting up Terraform
- https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
Installation of terraform on Amazon-Linux
- sudo yum install -y yum-utils
- sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
- sudo yum -y install terraform
- terraform -help
- cd ~/
- Download the codebase - git clone
https://gitlab.com/classpathio-terraform/terraform.git
- Navigate to the directory - cd terraform/01-aws-ec2-instance
- terraform init
- terraform plan
- terraform apply
- terraform destroy

CloudFormation
- Setup the template
- Upload the template to S3
https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/
EC2InstanceWithSecurityGroupSample.template
- Create a stack with the Template
- Run the stack
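The same stack can also be created from code. A minimal boto3 sketch, assuming the sample template above and a key pair named test-vm (placeholder):

import boto3

cfn = boto3.client('cloudformation', region_name='us-west-2')

# Create the stack from the S3-hosted sample template and wait for completion
cfn.create_stack(
    StackName='ec2-with-sg-demo',
    TemplateURL='https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template',
    Parameters=[{'ParameterKey': 'KeyName', 'ParameterValue': 'test-vm'}]
)
cfn.get_waiter('stack_create_complete').wait(StackName='ec2-with-sg-demo')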

In the context of security domain


- Principal - Logged in Entity
- User
- Application
- Entity
- Lambda
- EC2

sudo yum -y install git


git clone https://gitlab.com/classpath-docker/docker-lab.git
cd docker-lab
cd 04-multi-stage/
docker build -t items-api .
https://kubernetes.io/docs/tasks/tools/

Local zones
https://aws.amazon.com/about-aws/global-infrastructure/localzones/locations/

Day-2
AWS CLI reference
https://docs.aws.amazon.com/cli/latest/reference/ec2

aws ec2 run-instances --image-id ami-01216e7612243e0ef --instance-type t2.micro --key-name test-vm --region ap-south-1
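The boto3 equivalent of the CLI call above, as a hedged sketch (same AMI, instance type and key pair):

import boto3

ec2 = boto3.client('ec2', region_name='ap-south-1')

# Launch a single t2.micro instance from the given AMI
response = ec2.run_instances(
    ImageId='ami-01216e7612243e0ef',
    InstanceType='t2.micro',
    KeyName='test-vm',
    MinCount=1,
    MaxCount=1
)
print(response['Instances'][0]['InstanceId'])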
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-
services-us-west-2.html#w2ab1c33c58c13c17
https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/
EC2InstanceWithSecurityGroupSample.template

https://gitlab.com/26-09-microservices/order-microservices
sudo yum -y install git
git clone https://gitlab.com/26-09-microservices/order-microservices.git

Kubernetes
https://gitlab.com/kubernetes-workshop2

AWS Solution architect certification


https://skillcertpro.com/
https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c03/
https://www.aws.training/certification/?cta=saatopbanner&refid=662aeb66-1ee5-4842-
b706-60c6a1b4f187

Helm commands
helm version
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm repo list
helm install demowp stable/wordpress
helm delete demowp

helm search repo mysql


helm search repo nginx

helm fetch stable/jenkins

tar -xvf jenkins-2.5.4.tgz

cd jenkins

helm repo index ./example

To remove the repo - helm repo remove stable

Creating a custom chart


helm create order-microservice

Installing Terraform on Amazon Linux-2

Install Terraform - https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform
terraform -version

Install git - sudo yum -y install git


Clone the Git repo - git clone
https://gitlab.com/classpathio-terraform/terraform.git
aws configure --profile terraform
enter access key - <access-key>

enter access secret - <access-secret-key>


region: ap-south-1
format: json

Running Terraform
terraform init
terraform plan

https://kubernetes.io/docs/tasks/tools/
kubectl version
- client version
- server version - null

SQS - Java configuration

Clone the repository - https://gitlab.com/08-03-23-cisco-aws-dev/java-sqs

Dependencies
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.11.106</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sqs</artifactId>
</dependency>

---
AWS Code commit
- Current repo - https://gitlab.com/12-12-vmware-microservices/orders-
microservice.git
Run the command - git clone https://gitlab.com/12-12-vmware-microservices/orders-
microservice.git
- Navigate inside the order-microservice directory
- git remote show origin
- To pull code from public repo you do not need credentials
- To pull/push code to/from the private repo, you need to be authenticated and
authorized
- Two types of authentication
- Git
- Uses Openssh protocol for authentication
- More secure
- can be used for automation
- HTTP
- Uses username/password for authentication
- Needs user to enter username and password
- Run the ssh-keygen command
- It will generate both public and private keys
- public key name - id_rsa.pub
- private key name - id_rsa
- On windows - C:\Users\<your-name>\.ssh directory
- On Linux/Mac - ~/.ssh directory
- Remove the link to the old repo using the command - git remote remove origin
- Add the new link to the new repo on AWS Code-commit
git remote add origin ssh://git-codecommit.ap-south-1.amazonaws.com/v1/repos/order-microservice
- Run the command - cd ~/.ssh
- Open the config file under ~/.ssh/config and add the below entry
Host git-codecommit.*.amazonaws.com
User <APKA4DNDJF4CP4RIHLVC - key under your IAM credentials screen>
IdentityFile ~/.ssh/id_rsa

Setting up CI
- Traditional way is to set up the Jenkins server
- Jenkins is an orchestrator which performs
- clone -> compile -> test -> package -> install -> deploy
- The server should be configured with jobs, and the server has to be patched
- There should be communication between dev and ops

Pipeline as a Service - to overcome the limitations of Centralized build servers


- Write the build instructions with code
- The instructions are written in the file and the file is also committed to the
repo
- All changes to the file go through the change management process
- Trigger the orchestrator on every commit
- The provider creates compute resources to run the build
- Once the job/process is complete, the compute resources are deleted
- Reduces the dependency on other teams

Infrastructure as Code
- Declarative way to create resources
- Rollback and exception handling
- Detect the drift
- Repeatable, Reusable, Version controlled
- Ex: Terraform, CloudFormation

CloudFormation
- Sample templates by service
- https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-
services-us-west-2.html
- Use a template that is ready
https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/
VPC_Single_Instance_In_Subnet.template

Terraform
- By Hashicorp
- Opensource and vendor agnostic
- https://gitlab.com/classpathio-terraform/terraform
- EKS using terraform -

For Developer role


- Create an IAM policy with Admin access to the EKS cluster - AmazonEKSAdminPolicy
- To view the nodes under the configuration tab in the management console
- Create an IAM role - eks-admin and attach the above policy
- Run the command aws iam get-role --role-name eks-admin --profile terraform
- Any IAM user from the account will be able to assume the role provided they have
the appropriate policy in place
- In the config file under ~/.aws directory, add the below lines
[profile <your-name>]
role_arn = arn:aws:iam::831955480324:role/aws-dev-trainee-role
source_profile = <your-name>
- Run the command - aws eks --region ap-south-1 update-kubeconfig --name eks-
cluster --profile <your-name>
- View the config file under ~/.kube directory
- Run the command - kubectl get nodes
- kubectl create ns <your-name>-ns
- kubectl config set-context --current --namespace=<your-name>-ns
- cd kubernetes-manifests
- kubectl apply -f k8s-order-microservice.yaml
- kubectl get pods

Create an IAM policy (AmazonEKSAssumePolicy) to assume the above role eks-admin

- Create an IAM user and attach the above policy


- Run the command aws configure --profile eks-admin

- Verify if the above user can assume the role


- aws sts assume-role --role-arn arn:aws:iam::831955480324:role/eks-admin --role-
session-name manager-session --profile manager

- Switch the context to the user who created the eks cluster
- aws eks --region ap-south-1 update-kubeconfig --name eks-cluster
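The assume-role check above can also be done with boto3. A sketch under the same assumptions (profile manager, role eks-admin in account 831955480324):

import boto3

# Use the 'manager' profile configured earlier with aws configure --profile manager
session = boto3.Session(profile_name='manager')
sts = session.client('sts')

creds = sts.assume_role(
    RoleArn='arn:aws:iam::831955480324:role/eks-admin',
    RoleSessionName='manager-session'
)['Credentials']

# Temporary credentials (access key, secret key, session token) valid until 'Expiration'
print(creds['AccessKeyId'], creds['Expiration'])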

ECS - Elastic Container Service


- Native container orchestrator from AWS
- Flexibility in deploying container workload

Serverless
- AWS manages the infrastructure
- Reduce Operational costs
- Org can focus on the primary domain
- Scalability/Availability/Resilience/Security
- Vendor lock-in
- Serverless Portfolio
- Compute
- Lambda, SQS, SNS, StepFunctions
- Storage
- S3, EFS
- Services
- DynamoDB, Cognito, API Gateway

SQS - Simple Queue Service


- Managed service
- Specific to Region
- Decouple producers and consumers
- Acts as a buffer to store messages
- One to One
- It is the responsibility of the consumer to delete the message
- Asynchronous
- Pull model - Consumer should poll for the messages
- The max size of the payload is 256 KB
- If the message is not processed, the message can be routed to a DLQ (Dead letter
queue)
- Delivery semantics
- Standard
- Best-effort ordering
- At least once
- No data loss
- Consumers should process duplicates
- Unlimited API requests
- FIFO
- Order is guaranteed
- Exactly once
- Limited throughput
- If the payload is not processed, the messages will be stored in the DLQ
Visibility timeout
- https://gitlab.com/19-06-23-cisco-aws-dev
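A hedged boto3 sketch of the pull model described above: the consumer polls the queue and is responsible for deleting each message after processing (the queue URL is a placeholder):

import boto3

sqs = boto3.client('sqs', region_name='ap-south-1')
queue_url = 'https://sqs.ap-south-1.amazonaws.com/<account-id>/<queue-name>'

# Pull model: the consumer polls for messages (long polling up to 20 seconds)
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20
)

for message in response.get('Messages', []):
    print(message['Body'])
    # The consumer must delete the message; otherwise it becomes visible
    # again after the visibility timeout and may be redelivered
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])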

Simple Notification Service - SNS


- Managed service
- Specific to Region
- Publisher subscriber pattern
- The publisher publishes the message and the subscribers will receive the payload
- There is no buffer
- One to Many
- Asynchronous
- Push model
- Types of Subscribers
- Email, Text, SQS, Lambda
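A minimal boto3 sketch of the push model: the publisher publishes once and every subscriber (email, text, SQS, Lambda) receives the payload. The topic ARN is a placeholder:

import boto3

sns = boto3.client('sns', region_name='ap-south-1')

sns.publish(
    TopicArn='arn:aws:sns:ap-south-1:<account-id>:order-events',
    Subject='Order created',
    Message='Order 1001 has been created'
)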

Databases
- Regional services in AWS
- Relational
- Data is split into multiple tables
- The data is stored in rows and columns
- The data integrity is managed at the DB
- Apply normalization techniques to eliminate anomalies (creation, updation, deletion)
- Vertical scaling
- Strict schema to maintain integrity
- Ex: MySql, Oracle, Postgresql, Sybase
- AWS - Relational Database Service
- Umbrella project
- Compatible with MySQL, PostgreSql, Oracle

- NoSQL
- Databases that are Not Only SQL

- Different types of databases


- Schema less
- Document database - DocumentDB
- Key value database - DynamoDB
- Graph database - Neptune
- Time series database - Timestream
- Ledger database - QLDB
- Columnar database - KeySpaces
- In Memory - MemoryDB

- Data is redundant and no normalization


- Horizontal scalability
Commands to install mysql client
- sudo wget https://dev.mysql.com/get/mysql80-community-release-el9-1.noarch.rpm
- sudo dnf install mysql80-community-release-el9-1.noarch.rpm
- dnf repolist enabled | grep "mysql.*-community.*"
- sudo dnf install mysql-community-server
- sudo mysql -h <host-name> -u admin -p

- sudo dnf install -y mariadb105


- connect to the db - sudo mysql -h database-1.c4xbzwxvwwsz.ap-south-
1.rds.amazonaws.com -u admin -p

Script to run on the database


show databases;
create database <your-name>_db;
use <your-name>_db;
show tables;

-- creating the tables


create table employees (
emp_id bigint primary key auto_increment,
name varchar(40) not null,
email varchar(50) not null,
dob date not null
);
-- inserting the values to the table
insert into employees (name, email, dob)
values ('harish', '[email protected]', '1996-10-10');

-- view the data in the table

select * from employees;


drop database db_test;
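The same queries can be run from application code. A hedged sketch assuming the third-party PyMySQL package (pip install pymysql); the host and credentials mirror the CLI connection above:

import pymysql

conn = pymysql.connect(
    host='database-1.c4xbzwxvwwsz.ap-south-1.rds.amazonaws.com',
    user='admin',
    password='<password>',
    database='<your-name>_db'
)

# Read back the rows inserted by the script above
with conn.cursor() as cursor:
    cursor.execute("select emp_id, name, email, dob from employees")
    for row in cursor.fetchall():
        print(row)

conn.close()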

DynamoDB
- Serverless database
- NoSQL database
- Key value database
- No fixed schema
- Can handle 10 trillion requests per day
- Supports both strong and eventual consistencies
- Data is partitioned using the partition key
- An optional sort key can be provided, in which case the partition key and sort key act
as a composite key
- Unlike traditional relational databases, there is no persistent connection to the database
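A short boto3 sketch of the composite-key idea; the table name 'orders' and the key attribute names are illustrative assumptions, not the tables used later in the lab:

import boto3

dynamodb = boto3.resource('dynamodb', region_name='ap-south-1')
table = dynamodb.Table('orders')   # partition key: customer_id, sort key: order_id

# Partition key + optional sort key together act as the composite primary key
table.put_item(Item={'customer_id': 'C100', 'order_id': 'O-1', 'amount': 250})

item = table.get_item(Key={'customer_id': 'C100', 'order_id': 'O-1'})['Item']
print(item)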

Java codebase to perform CRUD operations on DynamoDB


- https://gitlab.com/19-06-23-cisco-aws-dev/aws-dynamodb-java

OAuth 2.0

Security
- Security should be implemented using a defence-in-depth strategy
Levels of security
- Data
- Data should be encrypted at rest
- Use a strong encryption algorithm - AES-256
- Do not store sensitive information in the log files
- Use one-way hash functions to store sensitive information in the database
- System
- Principle of least privilege
- Minimum access to all the resources
- No unnecessary applications/software
- Continuous patching and updates for vulnerabilities
- Vulnerability scanning of docker images, VMs etc
- No root access to the machines
- Network
- Only the allowed ports should be open
- Segregate private subnets and public subnets
- Setup firewalls, security groups
- Encryption in transit, TLS-1.2, HTTPS, certificates
- Rate limiting, time limiting
- Application
- Authentication
- Whether you are the person you claim to be
- HTTP - 401 - Unauthorized
- Authorization
- Do you have the necessary privileges to access the resources
- HTTP - 403 - Forbidden
- Even if one of the layers is compromised, the next line of defence should be
even stronger
- The strength of a chain depends on the strength of its weakest link
- If there is a vulnerability, it WILL be exploited by an adversary

OAuth 2.0
- Delegation based Security framework
- The PRINCIPAL/RESOURCE_OWNER
will
DELEGATE PART_OF_RESPONSIBILITY - (read, fetch)
to a
TRUSTED_APPLICATION - (HDFC/ICICI bank)
to
PERFORM_LIMITED_ACTION - (read documents, fetch documents)
on his
BEHALF

Vocabulary from Security domain


- Anonymous user - An entity who is not yet authenticated
- Entity
- User
- Machine
- Program
- Service
- Principal
- Authenticated entity is referred to as Principal

OAuth 2.0
- It is a framework
- Grant flows depending on the type of client
- Different types of clients are supported
- Public trusted clients - (End users)
- Backend application - Deploy your applications on servers
ex: Java, Python, Nodejs
- Store confidential information
- Back channel
- Secure
- Front end applications - Mobile apps, SPA
ex: Angular/React/JS
- Front channels
- Cannot store confidential information
- insecure

- Private (No user, Machine to machine communication)


- application/service - application/service communication
- back channel
- secure

Grant types
- Authorization code
- Backend public client
- Proof Key for Code Exchange - PKCE
- Front end public client
- Client Credentials
- service to service communication
- microservice to microservice communication

OAuth 2.0 actors


- Resource (Protected)
- API's
- Orders api
- Inventory api
- Resource Owner
- End Users
- Client application
- will be building a Spring Boot application (backend) - Auth code grant flow
- Authorization Server
- Out of the box implementation - OKTA
- Options
Social
Google
Facebook
Github
IDP - Enterprise hosted solutions
OKTA
- Provides Identity and management solutions
- Developer account is free to use
WSO2
Auth0
Cognito
KeyCloak - Open source implementation
- Can be self hosted

OAuth2 Auth Code workflow


- https://developer.okta.com/docs/guides/implement-grant-type/authcode/main/

Step-0 - Onboarding process


- Client application registers with the Auth server
Onboarding process
input -> redirect url / callback-url
http://localhost:8555/authorization-code/callback
output -> client-id, client-secret
public confidential
client-id - 0oaencvv1nJNwQEEB5d7
client-secret(sensitive-information) DehMoTrVmwApbj_MvhQq5-
muFcU_U8zlqBEIUPJTtgEJrJCUZlmHn70aNw3sdeNe

Step-1
- Auth server exposes a metadata url
- Also referred to as the well-known url
- Public endpoint
Okta- https://dev-7858070.okta.com/oauth2/default/.well-known/oauth-
authorization-server
Cognito- https://cognito-idp.ap-south-1.amazonaws.com/ap-south-
1_zY1QIrdPp/.well-known/openid-configuration

Details:
issuer: https://dev-7858070.okta.com/oauth2/default,
https://cognito-idp.ap-south-1.amazonaws.com/ap-south-1_zY1QIrdPp

authorization_endpoint:
"https://dev-7858070.okta.com/oauth2/default/v1/authorize"
https://classpath.auth.ap-south-
1.amazoncognito.com/oauth2/authorize,
token_endpoint: "https://dev-7858070.okta.com/oauth2/default/v1/token"

https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token

registration_endpoint: "https://dev-7858070.okta.com/oauth2/v1/clients"
jwks_uri: "https://dev-7858070.okta.com/oauth2/default/v1/keys"

An anonymous user tries to access the client application

Step-2
Redirect the anonymous user to the Authorize endpoint of the Auth server

https://dev-7858070.okta.com/oauth2/default/v1/authorize
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/authorize

Query parameters
client_id - 0oaencvv1nJNwQEEB5d7
response_type - code - fixed
scope - openid - fixed
redirect_uri http://zoho-app/authorization-code/callback
state - 9fe74sdf91-346d-4b9b-8884-c2e59c2fcce2

Construct the url


https://dev-7858070.okta.com/oauth2/default/v1/authorize?
client_id=0oaencvv1nJNwQEEB5d7&response_type=code&redirect_uri=http%3A%2F%2Fzoho-
app%2Fauthorization-code%2Fcallback&scope=openid&state=975733f8-f1c6-4eeb-918b
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/authorize?
client_id=71boupohni0jfa1g55nvlved0s&response_type=code&redirect_uri=https%3A%2F
%2Flocalhost%3A8555%2Fcallback%2Furl&scope=openid&state=975733f8-f1c6-4eeb-918b
https://accounts.google.com/o/oauth2/auth/oauthchooseaccount?
client_id=641273190747.apps.googleusercontent.com&scope=email
%20profile&redirect_uri=https%3A%2F%2Faccounts.zoho.in%2Faccounts
%2Foauthcallback&response_type=code&state=3b23d7caea4153d7c26721826f789416d719c5e5c
dbe1fb7eff7a4c75e6472fb2c17a937580c354a74669cf1bdede5c5dbc57ef8c57e201cc56efc9dd75b
9c36e8f8d78ebf70706aa78568f06b8d7ae1e8044ea81617bab8d206e8d1c1d3a2a804b2dc4eb742a30
376e2a7e3548af2000ba28b5dff87ea2582a550ed5e01113fd714cd2d64bf0aaa05bd08e1ae91362e&p
rompt=select_account&service=lso&o2v=1&theme=glif&flowName=GeneralOAuthFlow
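A small Python sketch that builds an authorize URL like the ones above from the listed query parameters (values are the placeholders from this step):

from urllib.parse import urlencode

params = {
    'client_id': '0oaencvv1nJNwQEEB5d7',
    'response_type': 'code',
    'scope': 'openid',
    'redirect_uri': 'http://localhost:8555/authorization-code/callback',
    'state': '975733f8-f1c6-4eeb-918b'
}

# Redirect the anonymous user to this URL
authorize_url = 'https://dev-7858070.okta.com/oauth2/default/v1/authorize?' + urlencode(params)
print(authorize_url)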

http://zoho-app/authorization-code/callback?code=X9OYhYxp47-
xPOEH1vqoej92uaQcFssWupNMEZVRLAM&state=975733f8-f1c6-4eeb-918b
authorization code - 656a8038-188a-43f9-8c4c-334b561e2528&state=975733f8-f1c6-
4eeb-918b

Step-3
- Auth server will authenticate the user
- If the authentication is successful, then seeks confirmation to authorize
- If Authorized, will return back the auth_code in the response
http://localhost:8555/authorization-code/callback?code=Hb9sfCcUtmEDkPOcThlDMGCcV-
ix79P7U4PTcdW4Fz4&state=975733f8-f1c6-4eeb-918b-9bd8f9d86194 - auth code = P-
PYO8IE6SQo4ndejfD_UQ38e0h_ePFlF14Pw3kZxE0
auth code - 096ab6cf-0d9a-4771-9815-751ac56cb009
- validity is 5 minutes
- one time use
- auth code is not secure
- it is transmitted over an untrusted network

Step-4
- The client application (backend application) uses the auth code
- Exchanges the auth code for an access token using a POST request to the
token endpoint
- POST method
- Endpoint - token endpoint
https://dev-7858070.okta.com/oauth2/default/v1/token
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token
- Body
grant_type - authorization_code
redirect_uri - http://localhost:8555/authorization-code/callback
code - _xOH7li8lRW13YZzKGt-WdgWwPqcCKh-lLdA-V7VeMA
- Basic authentication
client-id - 322css5uf7s76vls7030cue2vb
client-secret - lstn0tb7f87c1c11dboptlmegqpcp8los4f17uue8oleif7tjqk
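A hedged sketch of this exchange using the third-party requests package; the client id/secret are the ones above and the code is whatever came back in Step-3. The auth server then returns a response like the one below:

import requests

token_endpoint = 'https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token'

response = requests.post(
    token_endpoint,
    auth=('322css5uf7s76vls7030cue2vb', '<client-secret>'),   # HTTP Basic authentication
    data={
        'grant_type': 'authorization_code',
        'redirect_uri': 'http://localhost:8555/authorization-code/callback',
        'code': '<auth-code-from-step-3>'
    }
)

tokens = response.json()   # id_token, access_token, refresh_token, expires_in, token_type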
Response

{
"id_token":
"eyJraWQiOiJBRFNHSElPSnU2U0UwTlhicmxyXC9TZDZ3NmVpSWNPbFwvYzdJUWdiRHpKeVE9IiwiYWxnIj
oiUlMyNTYifQ.eyJhdF9oYXNoIjoiWDlzRC1NTmJlUk9GQ3ZjZDc4VnUzdyIsInN1YiI6ImJkM2EyNDE4LT
QwNmItNDhmMy05ZTVjLTc3MjIzY2Y5MmZlZiIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwc
zpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJcmRQ
cCIsImNvZ25pdG86dXNlcm5hbWUiOiJwcmFkZWVwIiwib3JpZ2luX2p0aSI6ImZlYTI3OTQxLWQ5ODctNDJ
lNi1hNmQ0LTY4YjI1MmRhM2Y4MSIsImF1ZCI6IjMyMmNzczV1ZjdzNzZ2bHM3MDMwY3VlMnZiIiwidG9rZW
5fdXNlIjoiaWQiLCJhdXRoX3RpbWUiOjE2OTkzNTMwNDAsImV4cCI6MTY5OTM1NjY0MCwiaWF0IjoxNjk5M
zUzMDQwLCJqdGkiOiI5NWY0MTRmZS0yZmFmLTRjOGUtOGJmYi1hNDljMmY5MWVjZWUiLCJlbWFpbCI6InBy
YWRlZXAua3VtYXI0NEBnbWFpbC5jb20ifQ.bCdAiIuLhIaP-
HKZAHMkaJxuLFH5rKhKyQJ_MYW_0V3GRKcQ1Jr5Wi0pFozU3sdiG4M3jEHMo5IgtLvQ1sJ4HDabyM2D3A_6
9f4TuCr_D2UCcK0TTrIy0tiZDg32u799Xf4_F5nXKatHO_uWKYx5o446RbNbZl5Yfiu7nSidxUjk4e0Dhah
hvVXfuSuIyE-vDgCfKtnqeHt77XobfX4ylA-ldndB0iQU5pYDdEf18XIHmJQwulZ1Y4yNc-
AvSBpLXG3tWDTE_x4QutRzDhwkj0Wjzk8js6mzKwK9zniKaae9fR7qMgaZ5MPyV9QPzyuRrJlPsy7AmbwOf
7pcD9r7Ww",
"access_token":
"eyJraWQiOiJPd1ZTeGVlZlloK01mTUNuYTBrSGEwYjVxTWFxTmErbmtSazdQakVibWFVPSIsImFsZyI6Il
JTMjU2In0.eyJzdWIiOiJiZDNhMjQxOC00MDZiLTQ4ZjMtOWU1Yy03NzIyM2NmOTJmZWYiLCJpc3MiOiJod
HRwczpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJ
cmRQcCIsInZlcnNpb24iOjIsImNsaWVudF9pZCI6IjMyMmNzczV1ZjdzNzZ2bHM3MDMwY3VlMnZiIiwib3J
pZ2luX2p0aSI6ImZlYTI3OTQxLWQ5ODctNDJlNi1hNmQ0LTY4YjI1MmRhM2Y4MSIsInRva2VuX3VzZSI6Im
FjY2VzcyIsInNjb3BlIjoib3BlbmlkIiwiYXV0aF90aW1lIjoxNjk5MzUzMDQwLCJleHAiOjE2OTkzNTY2N
DAsImlhdCI6MTY5OTM1MzA0MCwianRpIjoiMzJhZDg3NWYtNzc4YS00MzRhLWJhMDgtNGI5YzlhOGY3MDc4
IiwidXNlcm5hbWUiOiJwcmFkZWVwIn0.fUQdlVFKsumK5WxDyEQLsg02hHdA5KvlHkEf3ySpWtK5U6OEPYc
WTRfS0Gd7ubqOEHN_Ns1gtecKGPNHJ3zATi_cSsPO6OnyaNWf9LlplS3UbWph70G8VGivz7Hy4A8fvm1ZZB
adEKNNKewnv8dBE5xarZ7lUxh0tt8LkxMEDhl7HInboAVBhD_UqhUmQ-
rLovm2JL35J72KhtuhW4k2QkeypWrJDo7Q4GGX7o5v7Pdrwkl2NRyY_AFNNCytnQl7KSUJ-
QTE8azgeqtcoqnzk8jNpDtAUximGCZuKLQPZWBDfvUoIEwypK9GY94AOjdRT6D1GD4gTL6AZygawxHaTg",
"refresh_token":
"eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUlNBLU9BRVAifQ.XC4BvA3NIKkkSKqt5K0
CF7E6Cyl9Rs_x20GY83lV3gliAQDuAkYpztst7VssbLIxNu8jGmPQbaCZa3Pv3dBI6_mXXMU4_L8_TLWgp_
gAJ14StPXW5rJYMEb7lGvGr6FEGj5me1PvEVpe7ejrH7A-
Qvq8QhU8Zpwe9Pulp3TDcnBpJQDnc4VRbqzrIseLY5_HEsJ35UjBUIqzKEvaNbSoTvNyf3FdzAinlzl2K-
J2YEnXmsSHyT3bWKtdasMamYmQvMl8D0MRqk6MBnk5IxrE__4BndJm2J4ZrkSBffvdOnZy6RRacdq7ss-
hKtZLqzewlVrq1WR0hdJuwLshzUT8_A.53efmXHIGfjE-
Raf.bMLTOg5cCGDFidDspgJUe5PiaRhwCDwwVj-pwakdC1i-5tltXi4BvHW-Z_gcCDPXDlFL-
0EBNamqNstnSE7zd9DAef0lbtdSacApKKVwDxs6kMGkj68ai7NrLawaJHpIp6e8yJyheMDuEGdCn_qQblcG
PYAJ1nOpKo4ZAuHiiHfcCK53iU-JgYCxPuZtxJnHu_-mk-gNZtmBBce0g6s6qAIhYAQUrQNx5eGk-
FXlxSVYi0-L0wcTqWWWX1-
snCH74A0ke_EexgFwhcb8t6XYhTbZbIPA2xWpxAgkOsx1_2nlfMugS024tBhl4RVysjLH11p-3v_-
9AJmGht_5xvU_xyK06uB5_SgbYpeRHr29PSqecR1NLcQeFWjTh4whtygSYg31tKSgNHUd7UHhUHZAwCpzAr
82WJJNCPRhyk-IJQNIjn6kpbnB0bcb-f8n6_l-TIOVMdmLVjXSA3D0IwYwFq-
5HBDEcjhOJMHKYqugJMzc8ZumKmvzN4MwrEEeB8twDsSxneWrrqA3JjqRihDmC69tJGbWbSpBw8GqjFn675
eZm12drxkPDnW8348bAR1o14RB12aEwCpgZgiJ7kEsxAjUGak6q1S73V4tsalngIKtMO_7nkf8nn59dvPtw
76rx6tpdcjUlRNXiNl22YRSkqUuN7YvRET1UJvHJwvy10X7NNFUDVVmfvyU-
3pnhLzXHRVXQtXSxDIBScrY8aCMeT1lmOIE3EMir1jU1NZhEVb3tCcEiR1a91g_qc4exogBoO_2Ca9zolyF
7R5p-_gL6tTHnGF_XyO-
OUwlPkKTXpHaZxn9LedCBSh8PhZnfApsQQWYFdG1yyRyiuU3HfNXmiSRRYeHVtd075mjS6G8i1hQncy_IiU
hcLN1NavrKuD5CkRrTCjA9XHje-f7fkrQV3HU9jwY6SqhBaANmkbx85E-Ct4ut5As398Sx2EKc1xJGfn-
4cZcCn2VN_moayKWSHEuMGz8yYejNvhpGY2eK6AgNRjEiJwPYx_3NvBoHc-
Z1aQkG4pQAq2fUNmjIJ6f_NUacMKI6BxU-
p02SMFYLK473hnAGeK5CIM009i_MQex8xxNK6oDR2t8INP9tsAVhw8EpEyM0F9vfS0Wluvi26RO5m1vFQLV
FyzgZkePoieCwVAKr8jRIQ5IRH068PmSBegJJPA5mCxBw.GV5NA4LxOTcAGF5KgdixtA",
"expires_in": 3600,
"token_type": "Bearer"
}

JWT token
- Cryptographically secure token
- Self-contained token
- Not meant to store sensitive information
- It is a digitally signed assertion of the user's identity
- Short-lived token
- Validate the token - https://jwt.io
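Besides jwt.io, the token can be inspected locally because it is self-contained; a small Python sketch (decode only, no signature verification):

import base64
import json

def decode_segment(segment):
    # JWT segments are base64url encoded; add padding before decoding
    padded = segment + '=' * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = '<paste-access-token-here>'
header, payload, signature = token.split('.')

print(decode_segment(header))    # e.g. {"kid": "...", "alg": "RS256"}
print(decode_segment(payload))   # claims: sub, iss, exp, scope, username, ...
# The third segment is the signature; verify it against the keys published at the jwks_uri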

Step-5
- The client application will send the token to the resource server in the
Authorization header
- The resource server will decode the token
- Extract the claims
- Allow/Deny the resource access
- 403 in case of the forbidden use case

OAuth2-client-application
codebase - git clone https://gitlab.com/10-10-vmware-microservices/ekart-oauth2-
client.git
https://gitlab.com/14-11-synechron-microservices/ekart-oauth2-client.git

dependency of oauth2-client
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
start the server - port 8555
Invoke the endpoint - http://localhost:8555/api/userinfo

AWS Lambda functions


- Run functions without dealing with infrastructure
- Upload the code and run it on AWS
- Supports many languages Java, Python, Node, Go
- Adhoc jobs - DB backup
- Cost effective - Pay per use (per invocation, duration and memory)
- Built in metrics and integrating with Cloudwatch
- Integrates with other AWS services (Secrets Manager, Parameter Store,
Cloudwatch, S3, API Gateway, SQS, SNS, Step Functions, DynamoDB)

How lambda functions work internally


- ARN is returned when the code is uploaded
- Can be invoked using the ARN
- A load balancer is automatically provisioned behind the scenes
- Lambda keeps a reserve pool of instances behind the load balancer
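A hedged boto3 sketch of invoking a function through that ARN (the function name/ARN is a placeholder). The handler below, from the session, sends a message to SQS when it is invoked:

import boto3
import json

lambda_client = boto3.client('lambda', region_name='ap-south-1')

# Synchronous invocation by name or ARN; use InvocationType='Event' for async
response = lambda_client.invoke(
    FunctionName='arn:aws:lambda:ap-south-1:<account-id>:function:<function-name>',
    InvocationType='RequestResponse',
    Payload=json.dumps({'ping': True})
)
print(json.loads(response['Payload'].read()))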

import boto3
import json

def lambda_handler(event, context):


queue_url = 'https://sqs.ap-south-1.amazonaws.com/831955480324/pradeep-message'

# create a sqs client


sqs = boto3.client('sqs');

# create a payload message to be sent


message_body = "Hello world from SQS"

response = sqs.send_message(
QueueUrl=queue_url,
MessageBody=message_body
)
# Print the response
print("Message sent to SQS with MessageId:", response['MessageId'])

DynamoDB CRUD using Lambda function

Save user
---
import json
import boto3

# Initialize the DynamoDB client


dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):

body = json.loads(event['body'])
id = body.get('id')
name = body.get('name')
city = body.get('city')

# Sample data to insert


data_to_insert = {
"id": id,
"name": name,
"city": city
}

try:
# Insert data into the DynamoDB table
response = dynamodb.put_item(
TableName='users', # Replace with your table name
Item={
'id': {'S': data_to_insert['id']},
'name': {'S': data_to_insert['name']},
'city': {'S': data_to_insert['city']}
}
)
return {
'statusCode': 200,
'body': json.dumps('Data inserted successfully')
}
except Exception as e:
return {
'statusCode': 500,
'body': json.dumps(f'Error: {str(e)}')
}

Request:
{
"body": "{\"id\":\"12345\",\"name\":\"vinay\",\"city\":\"Hydrabad\"}"
}

List all users


---

import json
import boto3

# Initialize the DynamoDB client


dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):


try:
response = dynamodb.scan(
TableName='users'
)

# Extract the items from the response


items = response['Items']

# Convert the items to a JSON format


items_json = json.dumps(items, default=str)

return {
'statusCode': 200,
'body': items_json
}
except Exception as e:
return {
'statusCode': 500,
'body': json.dumps(f'Error: {str(e)}')
}

Update users
---

import json
import boto3

# Initialize the DynamoDB client


dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):


try:
# Parse the incoming event JSON to get the user id and updated attributes
body = json.loads(event['body'])
user_id = body.get('id')
user_name = body.get('name')
updated_attributes = body.get('updatedAttributes', {})

if not user_id or not updated_attributes:
    return {
        'statusCode': 400,
        'body': json.dumps('Invalid request. Provide user_id and updatedAttributes.')
    }
if not user_name or not updated_attributes:
    return {
        'statusCode': 400,
        'body': json.dumps('Invalid request. Provide user_name and updatedAttributes.')
    }

# Update the user in the DynamoDB table
update_expression = 'SET ' + ', '.join([f'#{attr} = :{attr}' for attr in updated_attributes.keys()])
expression_attribute_names = {f'#{attr}': attr for attr in updated_attributes.keys()}
expression_attribute_values = {f':{attr}': {'S': updated_attributes[attr]} for attr in updated_attributes.keys()}

response = dynamodb.update_item(
TableName='users', # Replace with your table name
Key={
'id': {'S': user_id},
'name': {'S': user_name}
},
UpdateExpression=update_expression,
ExpressionAttributeNames=expression_attribute_names,
ExpressionAttributeValues=expression_attribute_values,
ReturnValues='ALL_NEW'
)

# Return the updated product as a JSON response


updated_user = response['Attributes']
return {
'statusCode': 200,
'body': json.dumps(updated_user, default=str)
}
except Exception as e:
return {
'statusCode': 500,
'body': json.dumps(f'Error: {str(e)}')
}

Format of input json file


{
"body": "{\"id\":\"12345\",\"name\":\"vinay\",\"updatedAttributes\":{\"age\":\"22\",\"city\":\"Mumbai\"}}"
}

Delete user
---

import json
import boto3

# Initialize the DynamoDB client


dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):


try:
# Parse the incoming event JSON to get the user id and name
body = json.loads(event['body'])
user_id = body.get('id')
user_name = body.get('name')

if not user_id:
return {
'statusCode': 400,
'body': json.dumps('Invalid request. Provide user id.')
}
if not user_name:
return {
'statusCode': 400,
'body': json.dumps('Invalid request. Provide user name.')
}

# Delete the user from the DynamoDB table


response = dynamodb.delete_item(
TableName='users', # Replace with your table name
Key={
'id': {'S': user_id},
'name': {'S': user_name},
}
)

# Check if the delete operation was successful


if response['ResponseMetadata']['HTTPStatusCode'] == 200:
return {
'statusCode': 200,
'body': json.dumps('User deleted successfully')
}
else:
return {
'statusCode': 500,
'body': json.dumps('Error deleting user')
}
except Exception as e:
return {
'statusCode': 500,
'body': json.dumps(f'Error: {str(e)}')
}

Test event to delete the user with the id 12345


{
"body": "{\"id\":\"12345\",\"name\":\"harish\"}"
}

API Gateway

- Single entry point for all your backend applications


- Supports integration with Lambda, ALB, ECS
- Assign domain name
- Cache the response
- Aggregate the responses from multiple services
- Assign prefix
- Throttling, Ratelimit, Timelimit
- Monetization
- Authentication
- Managed service/Serverless

DynamoDB CRUD application exposed as REST APIs


<your-name-items-rest-api>
---

const AWS = require("aws-sdk");

const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event, context) => {


let body;
let statusCode = 200;
const headers = {
"Content-Type": "application/json"
};

try {
switch (event.routeKey) {
case "DELETE /items/{id}":
await dynamo
.delete({
TableName: "items",
Key: {
id: event.pathParameters.id
}
})
.promise();
body = `Deleted item ${event.pathParameters.id}`;
break;
case "GET /items/{id}":
body = await dynamo
.get({
TableName: "items",
Key: {
id: event.pathParameters.id
}
})
.promise();
break;
case "GET /items":
body = await dynamo.scan({ TableName: "items" }).promise();
break;
case "PUT /items":
let requestJSON = JSON.parse(event.body);
await dynamo
.put({
TableName: "items",
Item: {
id: requestJSON.id,
price: requestJSON.price,
name: requestJSON.name
}
})
.promise();
body = `Put item ${requestJSON.id}`;
break;
default:
throw new Error(`Unsupported route: "${event.routeKey}"`);
}
} catch (err) {
statusCode = 400;
body = err.message;
} finally {
body = JSON.stringify(body);
}

return {
statusCode,
body,
headers
};
};

Payload for PUT items


{
"id": "233",
"price": 500,
"name": "Ipad"
}

Cognito metadata url - https://cognito-idp.ap-south-1.amazonaws.com/ap-south-1_zY1QIrdPp/.well-known/openid-configuration

OAuth 2.0
Reference Video - https://www.youtube.com/watch?v=996OiexHze0
- It is a framework
- Grant flows depending on the type of client
- Different types of clients are supported
- Public trusted clients - (End users)
- Backend application - Deploy your applications on servers
ex: Java, Python, Nodejs
- Store confidential information
- Back channel
- Secure
- Front end applications - Mobile apps, SPA
ex: Angular/React/JS
- Front channels
- Cannot store confidential information
- insecure

- Private (No user, Machine to machine communication)


- application/service - application/service communication
- back channel
- secure

Grant types
- Authorization code
- Backend public client
- Proof Key for Code Exchange - PKCE
- Front end public client
- Client Credentials
- service to service communication
- microservice to microservice communication

OAuth 2.0 actors


- Resource (Protected)
- API's
- Orders api
- Inventory api
- Resource Owner
- End Users
- Client application
- will be building a Spring Boot application (backend) - Auth code grant flow
- Authorization Server
- Out of the box implementation - OKTA
- Options
Social
Google
Facebook
Github
IDP - Enterprise hosted solutions
OKTA
- Provides Identity and management solutions
- Developer account is free to use
WSO2
Auth0
KeyCloak - Open source implementation
- Can be self hosted

OAuth2 Auth Code workflow


- https://developer.okta.com/docs/guides/implement-grant-type/authcode/main/

Step-0 - Onboarding process


- Client application registers with the Auth server
Onboarding process
input -> redirect url
http://localhost:8555/authorization-code/callback
output -> client-id, client-secret
public confidential
client-id - bknr32pvcbmaooi8bdl2s5n32
client-secret(sensitive-information)
7aksdp505ahs9qji7lr9h6s7tekaul38u9cn4q2i1gl7omfisab

Step-1
- Auth server exposes a metadata url
- Also referred to as the well-known url
- Public endpoint
https://dev-7858070.okta.com/oauth2/default/.well-known/oauth-authorization-
server
https://cognito-idp.ap-south-1.amazonaws.com/ap-south-1_zY1QIrdPp/.well-
known/openid-configuration

Details:
issuer: https://dev-7858070.okta.com/oauth2/default,
https://cognito-idp.ap-south-1.amazonaws.com/ap-south-
1_zY1QIrdPp",
authorization_endpoint:
"https://dev-7858070.okta.com/oauth2/default/v1/authorize"

https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/authorize",
token_endpoint: "https://dev-7858070.okta.com/oauth2/default/v1/token"
"https://classpath.auth.ap-
south-1.amazoncognito.com/oauth2/token",
registration_endpoint: "https://dev-7858070.okta.com/oauth2/v1/clients"
jwks_uri: "https://dev-7858070.okta.com/oauth2/default/v1/keys"

An anonymous user tries to access the client application


Step-2
Redirect the anonymous user to the Authorize endpoint of the Auth server

https://dev-7858070.okta.com/oauth2/default/v1/authorize
https://dev-7858070.okta.com/oauth2/default/.well-known/oauth-authorization-
server
Query parameters
client_id - bknr32pvcbmaooi8bdl2s5n32
response_type - code - fixed
scope - openid - fixed
redirect_uri http://localhost:8555/callback/url
state - 9fe74f91-346d-4b9b-8884-c2e59c2fcce2

Construct the url


https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/authorize?
client_id=bnd27lln9tdc1b8dsmn7q2dkl&response_type=code&redirect_uri=http%3A%2F
%2Flocalhost%3A8555%2Fcallback%2Furl&scope=openid&state=975733f8-f1c6-4eeb-918b

http://zoho-app/authorization-code/callback?code=X9OYhYxp47-
xPOEH1vqoej92uaQcFssWupNMEZVRLAM&state=975733f8-f1c6-4eeb-918b

authorization code - X9OYhYxp47-xPOEH1vqoej92uaQcFssWupNMEZVRLAM


Step-3
- Auth server will authenticate the user
- If the authentication is successful, then seeks confirmation to authorize
- If Authorized, will return back the auth_code in the response
http://localhost:8555/authorization-code/callback?code=Hb9sfCcUtmEDkPOcThlDMGCcV-
ix79P7U4PTcdW4Fz4&state=975733f8-f1c6-4eeb-918b-9bd8f9d86194 - auth code = P-
PYO8IE6SQo4ndejfD_UQ38e0h_ePFlF14Pw3kZxE0
auth code - e099dac6-6459-4bb4-8369-cc1a76994d21
- validity is 5 minutes
- one time use
- auth code is not secure
- it is transmitted over an untrusted network

Step-4
- The client application (backend application) uses the auth code
- Exchanges the auth code for an access token using a POST request to the
token endpoint
- POST method
- Endpoint - token endpoint
https://dev-7858070.okta.com/oauth2/default/v1/token
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token
- Body
grant_type - authorization_code
redirect_uri - http://zoho-app/authorization-code/callback
code - <~>
- Basic authentication
client-id - 0oaencvv1nJNwQEEB5d7
client-secret - DehMoTrVmwApbj_MvhQq5-
muFcU_U8zlqBEIUPJTtgEJrJCUZlmHn70aNw3sdeNe
Response
{
"id_token":
"eyJraWQiOiJBRFNHSElPSnU2U0UwTlhicmxyXC9TZDZ3NmVpSWNPbFwvYzdJUWdiRHpKeVE9IiwiYWxnIj
oiUlMyNTYifQ.eyJhdF9oYXNoIjoiTEFENzNjeFVEMUNPR3pPbldIMlpPdyIsInN1YiI6IjAxYTdlNWM0LT
g1OGMtNGNkYy1hODBhLTkxOTI2NGQ1NzE3MyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwc
zpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJcmRQ
cCIsImNvZ25pdG86dXNlcm5hbWUiOiJwcmF2ZWVuIiwib3JpZ2luX2p0aSI6Ijk2N2ZjNWM3LTQxNGEtNGY
4Yi1iMzU3LTMxNDMwYjZmNmU3YSIsImF1ZCI6ImJrbnIzMnB2Y2JtYW9vaThiZGwyczVuMzIiLCJldmVudF
9pZCI6IjdhMmMyMjBiLTZhZWEtNGIzOC04NjZjLTllNzA4YTExZTAzNCIsInRva2VuX3VzZSI6ImlkIiwiY
XV0aF90aW1lIjoxNjg3MjU3OTY2LCJleHAiOjE2ODcyNjE1NjYsImlhdCI6MTY4NzI1Nzk2NiwianRpIjoi
ZjY3OTkxMjgtZjQ4NC00OTc1LThhOGYtMGIzZGI5ZmIxNTNhIiwiZW1haWwiOiJwcmFkZWVwLmt1bWFyNDR
AZ21haWwuY29tIn0.jr19FPDjQvXYdiDgN6jcg9_74hUE7ObghXQRRLVAMk_gLA2GQ3LKgLZafuNyLOxWV1
WG7OFtgj8JnXxN073fl5WxXVrjJfHO4C5mXXIxllwCYhEiJBmMwRv_bJO0XqmFnfqEGXN5QpPtFGTyNCuIK
Qej55dAnbt2ZzVJ3ZS10Lj5zPUneijY_7ARU2FchdcRQySkxi4sFJ2ACYAcDBUpySuF7x59ZQCXmmfDbld7
zuPAcyoGsBXTqMtkA10LPvxQphsVAUZ2IxOtz7_QrMHmUoPLd12_3ZawH8o7zF9jdgviAkenJSlcTVfPKWI
uTOXupT0Z_Q1wzIobMXoGrPSSXA",
"access_token":
"eyJraWQiOiJPd1ZTeGVlZlloK01mTUNuYTBrSGEwYjVxTWFxTmErbmtSazdQakVibWFVPSIsImFsZyI6Il
JTMjU2In0.eyJzdWIiOiIwMWE3ZTVjNC04NThjLTRjZGMtYTgwYS05MTkyNjRkNTcxNzMiLCJpc3MiOiJod
HRwczpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJ
cmRQcCIsInZlcnNpb24iOjIsImNsaWVudF9pZCI6ImJrbnIzMnB2Y2JtYW9vaThiZGwyczVuMzIiLCJvcml
naW5fanRpIjoiOTY3ZmM1YzctNDE0YS00ZjhiLWIzNTctMzE0MzBiNmY2ZTdhIiwiZXZlbnRfaWQiOiI3YT
JjMjIwYi02YWVhLTRiMzgtODY2Yy05ZTcwOGExMWUwMzQiLCJ0b2tlbl91c2UiOiJhY2Nlc3MiLCJzY29wZ
SI6Im9wZW5pZCIsImF1dGhfdGltZSI6MTY4NzI1Nzk2NiwiZXhwIjoxNjg3MjYxNTY2LCJpYXQiOjE2ODcy
NTc5NjYsImp0aSI6ImZmZmQ2MTBjLWZiMWItNGFhMS04ZjQ5LWM0YjdjNjRjYWUxZiIsInVzZXJuYW1lIjo
icHJhdmVlbiJ9.Fd_XL6UcuGvewg8QEC6JNjr7apXbyK9EgwzadWNDLQgEBx3puYbTkiiy9-
7pukhf5NhCcUDMeF5_cjBsKIM0MHoJ7ZeW_ErpHSbRL_dX2UeGPq88jN0bnZb-7lgwJXo2pR87YWxna-
ci7zyYumEhD9s_phU4D9V_56KKkJnULKrlpKd4N1R1xJ8Z8KYfpbNIWW5TxqQgK0cmvOW_YORcsdppBwxVH
imkzwIG9TQewaKFZBWhfsDguYn-B5vNFimK2r_5laobGgqxdsxD7NG_y-
Jsuj2e80Qdlif6JUiGjT_InPwM4b6PV8dshuf0D_lthf2l3zr1OcF4qdhN7JY6cg",
"refresh_token":
"eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUlNBLU9BRVAifQ.eNCU5gpGSejY3c8sAFy
L1XQbv47sKNmawjHblhI_puGdJOu9tlzYPVX0LHmhzinXfyA310luuj4xQKUFDAxmaeXn62RUb1IBnvcWpt
G8rU3Zm51eaSd2DTedcdUgk7ULDfNuXneYtcNN0BIVWIQbDA-nMZ6Op-I5VL-
vKy3YEoRxi19sszSLZJWrCtOh1tfABjoSscgpSWOu3eJwYzBKvwiDirxI5Xh4uLT7zdxNjHo-
VEqUl1I4K5EGIK2yrkwLo_EH07xe_9Ubuozm7lM5mL4HfKGR6zVrW-Osr4BjoQaTFtpUnHf8GBmce0E-
9o3nRKKbu45Awyu3IzJnUI99yw.-
huLN6bs7UR3AwIR.Ph61e_7HugWf8k_9wLLlFQoypIkkFqFVQgR2ic9Sfqh0j4OTNv85FouZZioQoYDEDNw
H-lPxtcS5GFViWWkqx0Np9-cpZDcVYmEK-
FaaHKeXleHiNhlS8d8DYgyBemgckwgj5SLcEKWu7lYxJPoDiW3tVfClN_p6Oji9sNLp2_1YQwPo2X1yrTzO
5vIj7lDrrTpfhVz-
jjD9tgzkRYLINK4F7VDNJcqApTVrYF0T5Zkg0iBpSOjGIt_ZwLUMsIagszR1MTxuBL_8vQjxazopLgNJerX
8cgsOMNdxtZc9T1xiSMAAx6srIM3Vw7B_AUKJIqY_QgCoO8C6TX3RuvzB5U98m0XAfp2Mq5vOFlwD6nB2A0
3Xi400eKZ_Quc0-
JFlx9syajqo69HB8PbiOGZOKzq4IaPPQjMAmAAxrkCHIxIbhuDIOrlqmOwY5tyoCy0Olv-
Y0rfx9WKVvBzJ9lQXcxwmht_ggS5RiReMNFi8baERlYaV46gK4UVv8n-
FZ_UErUVzQ9cwzAcktAqQUvMcacD-FGStTPwR-BILDmbLpd0i72YFzrl6jtL-
5fEmvx5ew_BbgavpEDa04pnWfCFq3oVDHJTgLhYZYLdl35YjB3B-
ByTwy7hlj2xensq5qfDqpnaKfCZOcbWa52tdaXo52BC6Hr7ZP3vo7sJxWjVB0WDY5Wv5ApySZQNvyPfSsb2
jeixrov5njufalUPWXI2vfr9rrAAHemFneJ7IDgKndO8mTCUIcSsj_xLgrCFpw1ZnVqWRae6WI6Y2E14moZ
IQmW9S8axPcyUIdef9thT6dPIvEwOGFZGKx8ctaulzRVCIujU-
m6jQmNJU87FZpNQgQxNF8Pg7xbZqvY42IPSimF1adOiHLwKkAa566NDurDf95qCZI00G6hpC_eITrc2giW0
e26CjHtIuWs8IB96ZYubfxDdl7jgtD5EpDMFjEOgpOMdo6_AUzLoeoWJ62Dkm-
ChvOpAlCdy13j3mVnLP2u6FWVx5x8YlAh_MuEszEODt7bph5RAJwGxEQMK_gQ8gKy1_TwsVzdxFPE8VfIal
Sle6P34DBQjWpdllt0ILoxx-
SlfPvCpNCKrhNR0FNkutpYKLPVw0BS0jduiaenKeBNOtVXOakoHIKY1EU4KO7TdHGVjlwXkfqfFThFsQJza
hnVMWck02EOn4LkLolXZztihOpIokr9nBWYPiLxXqbXxIA975lpMbm0rrUvhSPtOe1Rz6phydvXCQOYrxfM
W75lbGXctaTSQftysr.sNCvWx4OrgjOA-K4fR8r4g",
"expires_in": 3600,
"token_type": "Bearer"
}

JWT token
- Cryptographically secure token
- Self-contained token
- Not meant to store sensitive information
- It is a digitally signed assertion of the user's identity
- Short-lived token
- Validate the token - https://jwt.io

Step-5
- The client application will send the token to the resource server in the
Authorization header
- The resource server will decode the token
- Extract the claims
- Allow/Deny the resource access
- 403 in case of the forbidden use case

OAuth2-client-application
codebase - git clone https://gitlab.com/10-10-vmware-microservices/ekart-oauth2-
client.git
https://gitlab.com/14-11-synechron-microservices/ekart-oauth2-client.git

https://drive.google.com/drive/folders/12SiW82_ukoxZvD6EaPCLGC7bpewopdGo?
usp=sharing
https://drive.google.com/drive/folders/1KJxmx9SVkxvLmXU76jI7_cAKPwXGLzl8?
usp=drive_link
https://drive.google.com/drive/folders/198XyNqpJtb_8CDRXDc4vWfn9sC_MzeuU?
usp=sharing
