Sep 13, 2022
Cloud computing is the delivery of computing services such as:
- servers
- storage
- databases
- software
- networking
- analytics
over the internet (or "the cloud") on a pay-as-you-go basis.
Benefits
- Users can scale resources up or down quickly and easily to meet changing business
demands
- Users pay only for what they actually use.
Examples
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform.
Day-1
Motivation for Cloud adoption
- On Demand resources
- Cost effective
- No upfront costs
- Reduced operational costs
- Availability
- HA and DR
- Security
- shared responsibility
- Global footprint
- Global presence
- Increase innovation
- Agility
- Vertical scaling
- Horizontal scaling
Cloud Providers
- AWS - IaaS, PaaS and SaaS
- Azure - IaaS, PaaS and SaaS
- GCP - IaaS, PaaS and SaaS
- IBM - IaaS, PaaS and SaaS
- Oracle - IaaS, PaaS
- DigitalOcean - IaaS, PaaS
- Linode - IaaS, PaaS
- Heroku- PaaS
- MongoLab - PaaS - Database as a Service
- Confluent - PaaS - Managed Kafka as a Service
- Okta/Auth0 - PaaS - Identity as a Service
Cloud services
- Core Services
- Compute, Storage and Networking
Deployment models
- On Prem - Private cloud
- Defence sectors
- Intelligence services
- Govt sector projects
- Hybrid
- Migration plan to Cloud
- Part of the application on prem and other service on Cloud
- Public cloud
- All services are consumed from the cloud
AWS API
- Management console (UI)
- Users
- AWS CLI
- scripting
- SDK
- Programming languages
- Support for Java, Python, Go
- CloudFormation
- Infrastructure as Code (IaC)
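Whichever entry point is used (console, CLI, SDK, or CloudFormation), the same underlying AWS APIs are invoked. For example, once credentials are set up with aws configure, a quick sanity check that programmatic access works is:
aws sts get-caller-identity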
Global services
- AWS Identity and Access Management (IAM): IAM is a global service that enables
you to manage access to AWS resources across all regions. IAM users, groups, and
roles you create are not tied to a specific region.
- AWS CloudFront: CloudFront is a global content delivery network (CDN) service. It
has edge locations spread across multiple continents and is not limited to a single
region.
- Amazon Route 53: Route 53 is a global DNS service. It allows you to manage DNS
records and resolve domain names worldwide without being constrained by region-
specific boundaries.
- AWS WAF (Web Application Firewall): WAF is a global service that provides
protection against web application exploits and attacks. You can configure WAF
rules globally and apply them to your web applications in any region.
- AWS Certificate Manager (ACM): ACM simplifies the process of provisioning, managing,
and deploying SSL/TLS certificates for your AWS resources. Note that ACM certificates
are regional: a certificate must be requested in the region where it is used (for
CloudFront it must be requested in us-east-1).
- Amazon CloudWatch: CloudWatch is a monitoring service that collects and tracks
metrics, logs, and events from various AWS resources. Metrics and logs are stored per
region, but cross-account and cross-region dashboards let you monitor and analyze data
from a centralized location.
- AWS Direct Connect: Direct Connect is a global service that provides dedicated
network connections between your on-premises infrastructure and AWS. It is
available in various locations worldwide, enabling private and reliable
connectivity.
Amazon Macie:
This is a data security and privacy service that uses machine learning to
automatically discover,
classify, and protect sensitive data in AWS. It is currently available in only 8
regions.
Amazon GameLift:
This is a managed service for deploying, operating, and scaling session-based
multiplayer games.
It is currently available in only 6 regions.
AWS RoboMaker:
This is a service that makes it easy to develop, simulate, and deploy intelligent
robotics applications
at scale. It is currently available in only 6 regions.
On prem
- Infra will be managed by the Organizations
- Pre cloud
Usecases
- Org has invested a huge amount in infrastructure
- Org has the operational capability and the setup is very complex
- Org has strict regulation/compliance requirements, Ex: Defence, R&D, Judiciary, Govt
organizations
Hybrid
- Part of the infra will be on on-prem and part of infra is cloud
- Applications under migration
Usecase
- During Migration
- Company is evaluating the cloud strategy
- Licence/lease on the on-prem infrastructure is about to expire
- Leverage only a few services from the cloud
Cloud
- All the services are hosted on the cloud
Usecases
- For cloud native applications
- Greenfield projects
- startups
- Microsoft Azure
- Equivalent of IAM (Identity and Access Management) is Azure Active Directory
(Azure AD).
- Cloud-based identity and access management service
- Allows you to manage and secure your organization's users, devices, and
applications.
- It provides features for authentication, single sign-on (SSO), role-based access
control (RBAC), multi-factor authentication (MFA), and more.
Mac/Linux
cd ~/.aws
cat config
cat credentials
Windows
C:\Users\<username>\.aws
notepad config
notepad credentials
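For reference, the two files typically look like the sketch below (placeholder values; the aws-training profile name matches the --profile flag used later in these notes):
config
[default]
region = ap-south-1
output = json
[profile aws-training]
region = ap-south-1
output = json
credentials
[default]
aws_access_key_id = <access-key-id>
aws_secret_access_key = <secret-access-key>
[aws-training]
aws_access_key_id = <access-key-id>
aws_secret_access_key = <secret-access-key>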
https://awscli.amazonaws.com/v2/documentation/api/latest/index.html
EC2 commandline reference -
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/index.html
Run EC2 instances -
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/run-instances.html
Attach EC2FullAccess to the cli-user
aws ec2 run-instances --image-id <ami-0e89f04ea160a6f51> --instance-type t2.micro
--key-name aws-ec2-connect --profile aws-training
aws ec2 terminate-instances --instance-ids <instance-id> --profile aws-training
# rm credentials
# aws ec2 run-instances --image-id ami-0e89f04ea160a6f51 --instance-type t2.micro
--key-name aws-ec2-connect
Lab-1
Create a key pair
default values
- pem for mac users
- ppk for Putty tool
Instance
- Click on Launch instances
- Name -> test-vm
- OS Image -> Amazon-linux-2
- Instance type -> t2-micro
- Select the key-pair created
- Select the existing security group
- Go with the default storage (8 GB)
- Click on the launch instance
Navigate to the Management console and copy the public IP address of the instance
Navigate to the browser and hit http://<ip-address>:80
Lab-2
Add the below user data script in the Advanced details section
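The script itself is not captured in these notes; a minimal sketch (assuming Amazon Linux 2 and Apache httpd, so that http://<ip-address>:80 from Lab-1 serves a page) would be:
#!/bin/bash
yum -y update
yum -y install httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/index.html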
Status checks
- https://support.arcserve.com/s/article/202041339?language=en_US
- AWS uses "Launch Templates" or "Launch Configurations" to define and manage the
specifications for launching instances.
- The equivalent in Azure is "Virtual Machine Scale Sets" (VMSS) which lets you
create and manage a group of identical, load balanced VMs.
- In Google Cloud Platform (GCP), the equivalent service is "Instance Templates".
They allow you to specify the settings for your instances, which can be used to
create instances in a managed instance group or individually.
AWS CLI
Installation - https://docs.aws.amazon.com/cli/latest/userguide/getting-started-
install.html
verification of aws-cli - aws --version
Command Line reference document - https://docs.aws.amazon.com/cli/latest/index.html
Creating an IAM user
aws iam create-user --user-name test-user
Create EC2 instance - https://docs.aws.amazon.com/cli/latest/reference/ec2/run-
instances.html
aws ec2 run-instances --image-id ami-072ec8f4ea4a6f2cf --instance-type t2.micro --
key-name pradeep-ec2-keypair
Terminate EC2 instance - i-0275888172286a8b1
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
Example
aws ec2 run-instances --image-id ami-0c768662cc797cd75 --instance-type t2.micro --
security-group-ids sg-065ce32c0989b1954 --key-name ssh-connect
Security - IAM
Identity/Principal can be
- User group
- User
- Role
ARN - https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
{
"Version": "2012-10-17",
"Statement": [
{
# effect - Allow/Deny
"Effect": "Allow",
#verbs
"Action": "*",
# resource
"Resource": "*"
}
]
}
Lab-3
Storage
- Three types of Storage
- Block - EBS
- File Storage - EFS
- Object storage - S3
Block storage
- Elastic Block storage (EBS)
- The data is split into discrete blocks
- For efficient read and write
- There are two types of disk - SSD, HDD
- Upfront provisioning of capacity
- You will be charged for the disk space and disk input-output (I/O)
- An EBS volume is specific to an availability zone (AZ)
- At any given point in time, it can be attached to only one EC2 instance
- One EC2 instance can be attached to multiple EBS volumes
- The EC2 instance should be in the same AZ as the EBS volume
- Direct attach storage and Storage area Network
- Azure - Azure Disk storage
- GCP - Google persistent Disk
- Types of EBS volumes
- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
- SSD-backed: General Purpose (gp2/gp3) and Provisioned IOPS (io1/io2)
- HDD-backed: Throughput Optimized (st1) and Cold (sc1)
Volume size:
- The IOPS performance of an EBS volume is directly proportional to the volume
size.
- Larger volumes generally have higher IOPS performance than smaller ones.
Burst performance:
- Some EBS volume types, such as gp2, provide burst performance.
- This means that the volume can deliver IOPS beyond its baseline level for a
limited time
depending on the volume size.
Provisioned IOPS:
- The maximum IOPS you can provision for an io1 volume is 64,000.
To calculate the baseline IOPS for a gp2 EBS volume, use the following formula:
IOPS = volume size (GiB) x 3 (minimum 100, maximum 16,000)
For example, if you have a 500 GiB gp2 volume, the IOPS performance would be:
IOPS = 500 x 3 = 1500 IOPS (gp2 volumes deliver 3 IOPS per GiB)
If you have an io1 volume with a provisioned IOPS of 5000 and a volume size of 100
GiB, the IOPS performance would be:
IOPS = 5000 (provisioned IOPS)
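A CLI sketch for creating a volume like the one used in the lab below and attaching it to an instance (size and AZ are illustrative; volume and instance IDs are placeholders):
aws ec2 create-volume --volume-type gp2 --size 10 --availability-zone ap-south-1a
aws ec2 attach-volume --volume-id <volume-id> --instance-id <instance-id> --device /dev/xvdf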
Storage -
1. Attach the EBS volume to the EC2 instance from the management console
2. Login to the EC2 instance
3. Run the df -h command
4. Run the lsblk command - to list the block size
5. Run the command to check the file system - sudo file -s /dev/xvdf
6. To create the file system, run the command - sudo mkfs -t xfs /dev/xvdf
7. Create a directory with the command - sudo mkdir -p /app/data
8. Mount the volume to the /app/data directory with the command - sudo mount
/dev/xvdf /app/data
9. Add files into the volume
cd /app/data
sudo touch file.txt
10. Unmount the volume to the /app/data directory with the command - sudo
umount /app/data
11. Detach the volume from the ec2 instance in the EC2 dashboard volume
Optional
Create another ec2 instance in the same az name - second-instance
In the volume section, detach the volume and attach the volume to the second
instance
SSH to the second instance
Run the command to check the file system - sudo file -s /dev/xvdf
Create a directory with the command - sudo mkdir -p /app/data
Mount the volume to the /app/data directory with the command - sudo mount
/dev/xvdf /app/data
Check for the file.txt using the ls inside the /app/data directory
Use cases:
- Creating a backup
- encrypt the volume
- Move the volume from one AZ/Region to another
- Change the storage type
Use-cases (EFS)
- Can be used as shared file storage
- To create repositories
- To create a shared file system across instances
Lab - https://docs.aws.amazon.com/efs/latest/ug/wt1-test.html
EFS- volume
Lab on EFS
- Create a NFS security group
- allow inbound connection to NFS port(2049) from anywhere (0.0.0.0/0)
- Create an EFS volume with mount targets in multiple AZs
- Create two EC2 instances in two different availability zones
- Login to both the instances
- Run the following commands on both the instances
- sudo yum -y update
- sudo yum -y install nfs-utils
- mkdir -p ~/data
- In the EFS console, create mount targets in the required availability zones
- Edit the security group used by the EFS mount targets to allow NFS traffic from the
security group used by the instances
- Mount the NFS using the instructions in EFS attach screen
sudo mount -t nfs4 -o
nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-
017979aaa4c5bdd13.efs.ap-south-1.amazonaws.com:/ ~/data
Validate with df -h on ec2 instances
Troubleshooting steps
- https://docs.aws.amazon.com/efs/latest/ug/troubleshooting-efs-mounting.html
- DNS server not available
- service rpcbind restart
- service nfs restart
Object storage
- Unstructured data
- Cannot mount this to the instances
- Objects are stored in buckets and accessed over HTTPS via the S3 API
- An S3 bucket is specific to a region
- Objects are stored as key-value pairs
- The key should be unique within a bucket
- To modify the object, you need to download the entire object, modify and upload
the object
Properties of S3
- Supports Versioning - Can be used to store multiple version of the document
- Host Static websites
- Encryption at rest using AES-256
Use-cases
- To store media, audio, log files, documents
- Host static websites
- Data in encrypted at rest using AES-256
- Supports versioning
- Managed solution and a serverless offering
- Data transfer in is free
- Data transfer out is chargeable
Amazon S3 (Simple Storage Service) is a widely used object storage service provided
by AWS. Many companies and organizations across various industries, including many
well-known names, use S3 for their storage needs.
S3 pricing - https://aws.amazon.com/s3/pricing/?p=pm&c=s3&z=4
Example bucket policy allowing public read access to objects in the classpathio-media bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::classpathio-media/*"
]
}
]
}
ARN - https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html
- Amazon Resource name
- Unique string representing the resource
- Format of ARN - arn:partition:service:region:account:resource
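A few illustrative ARNs built from resources used elsewhere in these notes (account ID is a placeholder):
arn:aws:s3:::classpathio-media (S3 bucket names are globally unique, so region and account are empty)
arn:aws:ec2:ap-south-1:<account-id>:instance/i-0275888172286a8b1
arn:aws:iam::<account-id>:user/test-user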
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:GetObject"
],
"Resource": "*"
}
]
}
5. Attach the policy to a role. The role's trust policy (which principals are allowed to assume it) looks like:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::703363723066:user/admin_new",
"arn:aws:iam::688964866425:user/admin",
"arn:aws:iam::008152901260:user/admin",
"arn:aws:iam::915465141737:user/admin"
]
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
- S3 pricing - https://aws.amazon.com/s3/pricing/
RDS Lab
- Create a RDS
- Login to the EC2 instance
- Update the packages - sudo yum -y update
- Not needed - Install the Mariadb server - sudo dnf install mariadb105-server
- Install Mysql client
-
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToInstance.html
- sudo dnf install mariadb105
- Connect to the Mysql Server - mysql -u admin -h <db-endpoint> -p
mysql -u admin -h database-1.c4xbzwxvwwsz.ap-south-1.rds.amazonaws.com -p
- Enter the password in the console:
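To verify the connection end to end, a quick check from the same EC2 instance (the database name is illustrative):
mysql -u admin -h <db-endpoint> -p -e "SHOW DATABASES; CREATE DATABASE IF NOT EXISTS orders; SHOW DATABASES;"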
DynamoDB
- Fully Managed NoSQL Database Service:
Amazon DynamoDB takes away the complexity of managing a database, handling
tasks like hardware provisioning, setup,
configuration, replication, software patching, and cluster scaling.
- Seamless Scalability:
DynamoDB allows you to scale up or down your databases according to your
application needs,
without any downtime or performance degradation.
- Event-driven Programming:
With DynamoDB Streams, you can capture table activity, and trigger specified
actions based on this data,
perfect for real-time processing.
- Automatic Partitioning:
To support your throughput requirements, DynamoDB automatically partitions your
tables over an adequate number of servers.
- Built-in Security:
Amazon DynamoDB provides built-in security features like encryption at rest,
allowing you to secure your data and meet compliance requirements.
- Low Latency: DynamoDB is designed to provide consistent, single-digit millisecond
latency for read and write operations,
making it suitable for high-speed applications.
- In-memory Caching: DynamoDB Accelerator (DAX) provides an in-memory cache for
DynamoDB, to deliver faster access times for frequently accessed items.
- On-demand and Provisioned Capacity Modes: DynamoDB allows you to choose between
on-demand capacity (for flexible, pay-per-request pricing) and provisioned capacity
(for predictable workloads and cost efficiency).
- Global Tables: This feature enables multi-region replication of tables, providing
fast local performance for globally distributed applications.
- Integrated with AWS ecosystem: As part of the AWS ecosystem, DynamoDB integrates
seamlessly with other AWS services like AWS Lambda, Amazon EMR, Amazon Redshift,
and Amazon Data Pipeline.
Access Patterns:
Understand the most common query patterns your application will have, such as how
data will be retrieved, updated, or queried.
Identify the primary ways data will be accessed to design an efficient partition
key.
Uniform Data Distribution:
Choose a partition key that distributes data uniformly across partitions to avoid
"hot" partitions with excessive read or write activity.
Uneven data distribution can lead to performance bottlenecks and throttling.
Cardinality and Selectivity:
Opt for a partition key with high cardinality (a large number of distinct values)
to distribute data evenly.
Ensure the partition key has good selectivity, meaning it's used frequently for
queries and provides a diverse range of values.
Query Isolation:
Consider how well the chosen partition key isolates different types of queries from
each other.
Queries with different partition keys can run concurrently without causing
contention or performance issues.
Data Growth and Size:
Anticipate the potential growth of data over time and choose a partition key that
can accommodate future expansion.
Avoid partition keys that lead to partitions becoming too large, as it can impact
performance and scalability.
Factor in the Item Size: If your items are larger than 1 KB, you'll need more WCUs.
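A worked example of how item size maps to capacity units (standard DynamoDB definitions: 1 WCU = one write per second for an item up to 1 KB, 1 RCU = one strongly consistent read per second for an item up to 4 KB):
- Writing a 3.2 KB item costs ceil(3.2 / 1) = 4 WCUs per write
- A strongly consistent read of the same item costs ceil(3.2 / 4) = 1 RCU
- An eventually consistent read costs half of that, i.e. 0.5 RCU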
Lab:
- Create a DynamoDB table. It is region specific
- Name of the table - <your-name>-employees
- Partition key - id, type -> Number
- Sort key - name, type -> String
Using CLI
aws dynamodb put-item --table-name employees --item '{"id": {"N": "101"},"name": {"S": "John Doe"}}'
aws dynamodb put-item --table-name employees --item '{"id": {"N": "101"},"name": {"S": "John Doe"},"city": {"S": "Mangalore"},"zip": {"S": "577142"}}'
SQS
- A serverless offering from AWS
- Offered in Standard and FIFO types
- The Standard type provides at-least-once delivery semantics
- The FIFO type provides exactly-once processing semantics
- Point to point and decouple Producer from Consumer
- Acts as a buffer
- The producer sends the message to the Queue
- The consumer polls the message from the Queue
- The visibility timeout specifies how long a received message stays hidden from other
consumers before it becomes visible again
- The consumer should delete the message after processing it
Lab:https://sqs.ap-south-1.amazonaws.com/831955480324/messages
aws sqs send-message --queue-url
https://sqs.YOUR_REGION.amazonaws.com/YOUR_ACCOUNT_ID/YOUR_QUEUE_NAME --message-
body "Your message text here"
aws sqs receive-message --queue-url
https://sqs.YOUR_REGION.amazonaws.com/YOUR_ACCOUNT_ID/YOUR_QUEUE_NAME --max-number-
of-messages 1
aws sqs delete-message --queue-url
https://sqs.YOUR_REGION.amazonaws.com/YOUR_ACCOUNT_ID/YOUR_QUEUE_NAME --receipt-
handle YOUR_RECEIPT_HANDLE_HERE
AWS CLI
aws --version
aws s3api list-buckets
aws configure
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListAllMy niBuckets",
"s3:DescribeJob",
"s3:ListBucket",
"s3:GetBucketVersioning",
"s3:GetBucketPolicy"
],
"Resource": "*"
}
]
}
10.0.0.0/24 - VPC
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-ip-address-range/
- VPC stands for Virtual Private Cloud
- It is a logically isolated private network within AWS
- Defines the network boundary
- VPC is a regional resource
- A VPC spans across the AZs in a region
- There is a default VPC in every region
- You can create up to 5 VPCs per region by default
- Defines the IP address pool
- Every instance/loadbalancer/EFS/kafka-worker-nodes/eks-worker-nodes will borrow
the private IP address from the pool
- You can request more VPCs per region by raising a support ticket with the AWS
support team
- Two VPCs cannot have overlapping CIDR ranges if they need to be peered (VPC peering)
- Practical use cases of VPC is
- For creating network isolation for a product with multiple environments
- For creating network isolation for different products/applications
Networking topics
- IP address represents a unique address/coordinate within a network
- IP address can be in IPv4 or IPv6
- IPv4 is made up of 4 octets, e.g. 10.0.0.0
- Each octet is made of 8 bits
- 8 bits - 2 pow 8 - 256 values, range 0 - 255
- IPv4 range 0.0.0.0 - 255.255.255.255
- Within a VPC, you choose a private IP address range
Useful tools
- https://cidr.xyz
- https://www.ipaddressguide.com/cidr
Lab
- Creating a VPC
- Create a VPC in the Mumbai region
- select VPC only radio button
- Name of the VPC - <name>-lab-vpc
- Select the IPv4 CIDR manual input radio button
- Enter the CIDR block - 10.0.0.0/24 range 10.0.0.0 - 10.0.0.255 (256 IP
addresses)
- Leave all the other fields to default values and create the VPC
Subnet
- Represents subset of the IP address pool
- Group related instances into subnets
- Subnets are specific to Availability zone
Subnet-1
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- public-a
- Availability zone - ap-south-1a
- CIDR block - 10.0.0.0/26
- Range of the CIDR block - 10.0.0.0 - 10.0.0.63 (total 64 IP addresses)
Subnet-2
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- public-b
- Availability zone - ap-south-1b
- CIDR block - 10.0.0.64/26
- Range of the CIDR block - 10.0.0.64 - 10.0.0.127 (total 64 IP addresses)
Subnet-3
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- private-a
- Availability zone - ap-south-1a
- CIDR block - 10.0.0.128/26
- Range of the CIDR block - 10.0.0.128 - 10.0.0.191 (total 64 IP addresses)
Subnet-4
- Create a subnet under the lab-vpc
- Select the lab-vpc
- Name of the subnet- private-b
- Availability zone - ap-south-1b
- CIDR block - 10.0.0.192/26
- Range of the CIDR block - 10.0.0.192 - 10.0.0.255 (total 64 IP addresses)
https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html
10.0.0.0: Network address.
10.0.0.1: Reserved by AWS for the VPC router.
10.0.0.2: Reserved by AWS.
10.0.0.3: Reserved by AWS for future use.
10.0.0.255: Network broadcast address.
We do not support broadcast in a VPC, therefore we reserve this address.
Internet Gateway
- Allows internet connectivity (both inbound and outbound) with the VPC
- Create an Internet gateway called lab-igw
- Internet gateway has a one-to-one relationship with the VPC
- Attach it to the lab-vpc
Route table
- A route table is a ledger for specifying the route entries
- A route table associates subnets with an internet gateway/NAT gateway
- A default route table is created for every VPC
- The name of this route table is main and it cannot be deleted
- Create a route table public-route in the lab-vpc
- To the public-route table, associate the two public subnets (public-a and public-b)
explicitly
- Edit the routes and add an entry with destination 0.0.0.0/0 targeting the internet
gateway
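The same route entry can also be added from the CLI (route table and internet gateway IDs are placeholders):
aws ec2 create-route --route-table-id <route-table-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>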
SSH Connect to instance in the private subnet from the instance in the public
subnet
copy the ssh-keypair to the instance in the public subnet
Format - scp -i <pem-file> <pem-file> ec2-user@<public-ip_ec2>:/tmp/
example - scp -i "ec2-connect-new.pem" source_file destination_location
example:
scp -i "aws-ec2-connect.pem" aws-ec2-connect.pem ec2-user@ec2-<public-ip>.ap-south-
1.compute.amazonaws.com:/tmp/
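Once the key is on the public instance, connect from there to the instance in the private subnet (the private IP is a placeholder):
chmod 400 /tmp/aws-ec2-connect.pem
ssh -i /tmp/aws-ec2-connect.pem ec2-user@<private-ip>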
In case you have deleted the default VPC by mistake, follow the below instructions
to create a default VPC
https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html
Day-2
1. aws configure
enter the access key
enter the secret key
enter the region ap-south-1
enter the output format json
enter the command aws s3 ls
AWS cli-reference - https://docs.aws.amazon.com/cli/latest/reference/s3api/create-
bucket.html
EBS-volume
Bucket policy
Policy
{
# version - fixed
"Version": "2012-10-17",
"Statement": [
{
# unique name for the statement
"Sid": "read bucket",
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "read bucket",
"Principal": "*",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::classpathio-aws-training"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot",
"ec2:CreateTags",
"ec2:ModifySnapshotAttribute",
"ec2:ResetSnapshotAttribute"
],
"Resource": [
"*"
]
}
]
}
import boto3
from botocore.exceptions import ClientError

# Reconstructed sketch of the Lambda backup function: looking volumes up by
# their Name tag is an assumption; adjust the filter to your tagging scheme.
reg = 'ap-south-1'
volumename = 'pradeep-lambda'

def lambda_handler(event, context):
    # Connect to region
    ec2 = boto3.client('ec2', region_name=reg)
    volumes = ec2.describe_volumes(
        Filters=[{'Name': 'tag:Name', 'Values': [volumename]}])['Volumes']
    for volume in volumes:
        try:
            # Create snapshot
            result = ec2.create_snapshot(
                VolumeId=volume['VolumeId'],
                Description="Created by Tanmaya's Lambda backup function ebs-snapshots")
            print(result['SnapshotId'])
        except ClientError as e:
            print(e)
    return
S3 bucket
Resource based policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::classpathio-test-bucket/*"
}
]
}
Useful repositories
- https://gitlab.com/classpath-docker/docker-lab
Steps to install docker - https://docs.docker.com/engine/install/
Steps to install docker on AWS -
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-
image.html
- sudo yum -y update
- sudo yum install -y docker
- sudo systemctl start docker
- sudo systemctl enable docker
- sudo usermod -a -G docker ec2-user
- exit
- reconnect to the console
- docker info
docker container ls
curl http://localhost
curl http://localhost:81
docker container run -d -p 80:80 nginx
docker container ls
curl http://localhost
command to login to the container - docker exec -it <container-id> /bin/bash
cd /usr/share/nginx/html
echo "hello-world" > index.html
exit
command to stop the container - docker container stop <container-id>
command to start the container - docker container start <container-id>
command to restart the container - docker container restart <container-id>
command to delete docker container rm <container-id>
cd 02-hello-web
docker build -t mynginx .
docker images
docker container run -d -p 80:80 mynginx
cd 03-express-crud-app
docker build -t orders-api .
docker images
docker container run -d -p 3000:3000 orders-api
curl http://localhost:3000/
- To pull an image from the public repository you do not need credentials
- To pull an image from the private repository you need the credentials
- To push an image to public/private repository you need the credentials
docker login
username: classpathio
password: Welcome44
078711992964.dkr.ecr.ap-south-1.amazonaws.com/order-microservice
docker tag order-microservice classpathio/<yourname>-order-microservice:latest
docker push classpathio/<yourname>-order-microservice:latest
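To push the same image to the ECR repository mentioned above instead of Docker Hub, the usual sequence is (assuming the IAM user has ECR push permissions):
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin 078711992964.dkr.ecr.ap-south-1.amazonaws.com
docker tag order-microservice 078711992964.dkr.ecr.ap-south-1.amazonaws.com/order-microservice:latest
docker push 078711992964.dkr.ecr.ap-south-1.amazonaws.com/order-microservice:latest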
aws configure
access key - AKIA4DNDJCFCAAHB5G
secret key - k4OynFZhmMubn6Ksb7vmGPhpGvfEK35v
region - ap-south-1
outputformat - json
Kubernetes
Setting up Kubernetes cluster
- Cluster contains 2 components
- Control plane
- Data plane
- Control plane is managed by the platform
- AWS - EKS - Elastic Kubernetes Service
- Azure - AKS - Azure Kubernetes Service
- GCP - GKE - Google Kubernetes Engine
- On Prem
- To Setup Kubernetes
- AWS Management Console
- Kops
- eksctl
- Kubeadm
- git clone https://gitlab.com/12-12-vmware-microservices/kubernetes-manifests.git
- mkdir -p ~/.kube
- Copy the cluster kubeconfig to ~/.kube/config
- kubectl version
Output:
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0
CloudFormation
- Setup the template
- Upload the template to S3
https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/
EC2InstanceWithSecurityGroupSample.template
- Create a stack with the Template
- Run the stack
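The same stack can also be created from the CLI (stack name and key pair are placeholders; this sample template expects a KeyName parameter):
aws cloudformation create-stack --stack-name <stack-name> --template-url https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template --parameters ParameterKey=KeyName,ParameterValue=<key-pair-name>
aws cloudformation describe-stacks --stack-name <stack-name>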
Local zones
https://aws.amazon.com/about-aws/global-infrastructure/localzones/locations/
Day-2
AWS CLI reference
https://docs.aws.amazon.com/cli/latest/reference/ec2
https://gitlab.com/26-09-microservices/order-microservices
sudo yum -y install git
git clone https://gitlab.com/26-09-microservices/order-microservices.git
Kubernetes
https://gitlab.com/kubernetes-workshop2
Helm commands
helm version
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm repo list
helm install demowp stable/wordpress
helm delete demowp
cd jenkins
Running Terraform
terraform init
terraform plan
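The remaining Terraform lifecycle commands (not run during the session) would typically be:
terraform apply
terraform destroy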
https://kubernetes.io/docs/tasks/tools/
kubectl version
- client version
- server version - null (no cluster configured yet)
Dependencies
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.11.106</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sqs</artifactId>
</dependency>
---
AWS Code commit
- Current repo - https://gitlab.com/12-12-vmware-microservices/orders-
microservice.git
Run the command - git clone https://gitlab.com/12-12-vmware-microservices/orders-
microservice.git
- Navigate inside the order-microservice directory
- git remote show origin
- To pull code from public repo you do not need credentials
- To pull/push code to/from the private repo, you need to be authenticated and
authorized
- Two types of authentication
- SSH
- Uses the OpenSSH key pair for authentication
- More secure
- Can be used for automation
- HTTPS
- Uses username/password (Git credentials) for authentication
- Needs the user to enter a username and password
- Run the ssh-keygen command
- It will generate both public and private keys
- public key name - id_rsa.pub
- private key name - id_rsa
- On windows - C:\Users\<your-name>\.ssh directory
- On Linux/Mac - ~/.ssh directory
- Remove the link to the old repo using the command - git remote remove origin
- Add the new link to the new repo on AWS Code-commit
git remote add origin ssh://git-codecommit.ap-south-1.amazonaws.com/v1/repos/order-microservice
- Run the command - cd ~/.ssh
- Open the config file under ~/.ssh/config and add the below entry
Host git-codecommit.*.amazonaws.com
User <APKA4DNDJF4CP4RIHLVC - key under your IAM credentials screen>
IdentityFile ~/.ssh/id_rsa
Setting up CI
- Traditional way is to set up the Jenkins server
- Jenkins is an orchestrator which performs
- clone -> compile -> test -> package -> install -> deploy
- The server has to be configured with jobs and regularly patched
- There should be communication between dev and ops
Infrastructure as Code
- Declarative way to create resources
- Rollback and exception handling
- Detect the drift
- Repeatable, Reusable, Version controlled
- Ex: Terraform, CloudFormation
CloudFormation
- Sample templates by service
- https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-
services-us-west-2.html
- Use a ready-made template:
https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/
VPC_Single_Instance_In_Subnet.template
Terraform
- By Hashicorp
- Opensource and vendor agnostic
- https://gitlab.com/classpathio-terraform/terraform
- EKS using terraform -
- Switch the context to the user who created the eks cluster
- aws eks --region ap-south-1 update-kubeconfig --name eks-cluster
Serverless-
- AWS manages the infrastructure
- Reduce Operational costs
- Org can focus on the primary domain
- Scalability/Availability/Resilience/Security
- Vendor Lockin
- Serverless Portfolio
- Compute
- Lambda, SQS, SNS, StepFunctions
- Storage
- S3, EFS
- Services
- DynamoDB, Cognito, API Gateway
Databases
- Regional services in AWS
- Relational
- Data is split into multiple tables
- The data is stored in rows and columns
- The data integrity is managed at the DB
- Apply normalization techniques to eliminate anomalies (insertion, update,
deletion)
- Vertical scaling
- Strict schema to maintain integrity
- Ex: MySql, Oracle, Postgresql, Sybase
- AWS - Relational Database Service
- Umbrella project
- Compatible with MySQL, PostgreSql, Oracle
- NoSQL
- Databases that are Not Only SQL
DynamoDB
- Serverless database
- NoSQL database
- Key value database
- No fixed schema
- Can handle more than 10 trillion requests per day
- Supports both strong and eventual consistencies
- Data is partitioned using the partition key
- An optional sort key can be provided, in which case the partition key and sort key
act as a composite key
- Unlike traditional relational databases, there is no persistent connection to the
database
Security
- Security should be implemented using Defence in depth strategy
Levels of security
- Data
- Data should be encrypted at rest
- Use a strong encryption algorithm - AES-256
- Do not store sensitive information in the log files
- Use one-way hash functions to store sensitive information in the database
- System
- Principle of least privilege
- Minimum access to all the resources
- No unnecessary applications/software
- Continuous patching and updates for vulnerabilities
- Vulnerability scanning of docker images, VMs etc
- No root access to the machines
- Network
- Only the allowed ports should be open
- Segregate private subnets and public subnets
- Setup firewalls, security groups
- Encryption in transit, TLS 1.2, HTTPS, certificates
- Rate limiting, time limiting
- Application
- Authentication
- Are you the same person you claim to be?
- HTTP 401 - Unauthorized
- Authorization
- Do you have the necessary privileges to access the resource?
- HTTP 403 - Forbidden
- Even if one of the layers is compromised, the next line of defence should be
even stronger
- The strength of a chain depends on the strength of its weakest link
- If there is a vulnerability, it WILL be exploited by an adversary
OAuth 2.0
- Delegation based Security framework
- The PRINCIPAL/RESOURCE_OWNER
will
DELEGATE PART_OF_RESPONSIBILITY - (read, fetch)
to a
TRUSTED_APPLICATION - (HDFC/ICICI bank)
to
PERFORM_LIMITED_ACTION - (read documents, fetch documents)
on his
BEHALF
OAuth 2.0
- It is a framework
- Grant flows depending on the type of client
- Different types of clients are supported
- Public trusted clients - (End users)
- Backend application - Deploy your applications on servers
ex: Java, Python, Nodejs
- Store confidential information
- Back channel
- Secure
- Front end applications - Mobile apps, SPA
ex: Angular/React/JS
- Front channels
- Cannot store confidential information
- insecure
Grant types
- Authorization code
- Backend confidential client
- Proof Key for Code Exchange - PKCE
- Front end public client
- Client Credentials
- service to service communication
- microservice to microservice communication
Step-1
- Auth server exposes a metadata url
- Also referred to as the well-known URL
- Public endpoint
Okta- https://dev-7858070.okta.com/oauth2/default/.well-known/oauth-
authorization-server
Cognito- https://cognito-idp.ap-south-1.amazonaws.com/ap-south-
1_zY1QIrdPp/.well-known/openid-configuration
Details:
issuer: https://dev-7858070.okta.com/oauth2/default,
https://cognito-idp.ap-south-1.amazonaws.com/ap-south-1_zY1QIrdPp
authorization_endpoint:
"https://dev-7858070.okta.com/oauth2/default/v1/authorize"
https://classpath.auth.ap-south-
1.amazoncognito.com/oauth2/authorize,
token_endpoint: "https://dev-7858070.okta.com/oauth2/default/v1/token"
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token
registration_endpoint: "https://dev-7858070.okta.com/oauth2/v1/clients"
jwks_uri: "https://dev-7858070.okta.com/oauth2/default/v1/keys"
Step-2
Redirect the anonymous user to the Authorize endpoint of the Auth server
https://dev-7858070.okta.com/oauth2/default/v1/authorize
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/authorize
Query parameters
client_id - 0oaencvv1nJNwQEEB5d7
response_type - code - fixed
scope - openid - fixed
redirect_uri http://zoho-app/authorization-code/callback
state - 9fe74sdf91-346d-4b9b-8884-c2e59c2fcce2
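Putting the parameters above together (with the redirect_uri URL-encoded), the browser is sent to something like:
https://dev-7858070.okta.com/oauth2/default/v1/authorize?client_id=0oaencvv1nJNwQEEB5d7&response_type=code&scope=openid&redirect_uri=http%3A%2F%2Fzoho-app%2Fauthorization-code%2Fcallback&state=9fe74sdf91-346d-4b9b-8884-c2e59c2fcce2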
http://zoho-app/authorization-code/callback?code=X9OYhYxp47-
xPOEH1vqoej92uaQcFssWupNMEZVRLAM&state=975733f8-f1c6-4eeb-918b
authorization code - 656a8038-188a-43f9-8c4c-334b561e2528&state=975733f8-f1c6-
4eeb-918b
Step-3
- Auth server will authenticate the user
- If the authentication is successful, then seeks confirmation to authorize
- If Authorized, will return back the auth_code in the response
http://localhost:8555/authorization-code/callback?code=Hb9sfCcUtmEDkPOcThlDMGCcV-
ix79P7U4PTcdW4Fz4&state=975733f8-f1c6-4eeb-918b-9bd8f9d86194 - auth code = P-
PYO8IE6SQo4ndejfD_UQ38e0h_ePFlF14Pw3kZxE0
auth code - 096ab6cf-0d9a-4771-9815-751ac56cb009
- validity is 5 minutes
- one time use
- auth code is not secure
- it is transmitted over an untrusted network
Step-4
- The client application (backend application) uses the auth code
- Exchanges the auth code for the access token using a POST request to the token
endpoint
- POST method
- Endpoint - token endpoint
https://dev-7858070.okta.com/oauth2/default/v1/token
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token
- Body
grant_type - authorization_code
redirect_uri - http://localhost:8555/authorization-code/callback
code - _xOH7li8lRW13YZzKGt-WdgWwPqcCKh-lLdA-V7VeMA
- Basic authentication
client-id - 322css5uf7s76vls7030cue2vb
client-secret - lstn0tb7f87c1c11dboptlmegqpcp8los4f17uue8oleif7tjqk
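A sketch of the same exchange with curl against the Cognito token endpoint (client id/secret taken from above; the authorization code is single-use, so this is illustrative only):
curl -X POST https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token \
  -u 322css5uf7s76vls7030cue2vb:lstn0tb7f87c1c11dboptlmegqpcp8los4f17uue8oleif7tjqk \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=authorization_code" \
  -d "redirect_uri=http://localhost:8555/authorization-code/callback" \
  -d "code=_xOH7li8lRW13YZzKGt-WdgWwPqcCKh-lLdA-V7VeMA"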
Response
{
"id_token":
"eyJraWQiOiJBRFNHSElPSnU2U0UwTlhicmxyXC9TZDZ3NmVpSWNPbFwvYzdJUWdiRHpKeVE9IiwiYWxnIj
oiUlMyNTYifQ.eyJhdF9oYXNoIjoiWDlzRC1NTmJlUk9GQ3ZjZDc4VnUzdyIsInN1YiI6ImJkM2EyNDE4LT
QwNmItNDhmMy05ZTVjLTc3MjIzY2Y5MmZlZiIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwc
zpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJcmRQ
cCIsImNvZ25pdG86dXNlcm5hbWUiOiJwcmFkZWVwIiwib3JpZ2luX2p0aSI6ImZlYTI3OTQxLWQ5ODctNDJ
lNi1hNmQ0LTY4YjI1MmRhM2Y4MSIsImF1ZCI6IjMyMmNzczV1ZjdzNzZ2bHM3MDMwY3VlMnZiIiwidG9rZW
5fdXNlIjoiaWQiLCJhdXRoX3RpbWUiOjE2OTkzNTMwNDAsImV4cCI6MTY5OTM1NjY0MCwiaWF0IjoxNjk5M
zUzMDQwLCJqdGkiOiI5NWY0MTRmZS0yZmFmLTRjOGUtOGJmYi1hNDljMmY5MWVjZWUiLCJlbWFpbCI6InBy
YWRlZXAua3VtYXI0NEBnbWFpbC5jb20ifQ.bCdAiIuLhIaP-
HKZAHMkaJxuLFH5rKhKyQJ_MYW_0V3GRKcQ1Jr5Wi0pFozU3sdiG4M3jEHMo5IgtLvQ1sJ4HDabyM2D3A_6
9f4TuCr_D2UCcK0TTrIy0tiZDg32u799Xf4_F5nXKatHO_uWKYx5o446RbNbZl5Yfiu7nSidxUjk4e0Dhah
hvVXfuSuIyE-vDgCfKtnqeHt77XobfX4ylA-ldndB0iQU5pYDdEf18XIHmJQwulZ1Y4yNc-
AvSBpLXG3tWDTE_x4QutRzDhwkj0Wjzk8js6mzKwK9zniKaae9fR7qMgaZ5MPyV9QPzyuRrJlPsy7AmbwOf
7pcD9r7Ww",
"access_token":
"eyJraWQiOiJPd1ZTeGVlZlloK01mTUNuYTBrSGEwYjVxTWFxTmErbmtSazdQakVibWFVPSIsImFsZyI6Il
JTMjU2In0.eyJzdWIiOiJiZDNhMjQxOC00MDZiLTQ4ZjMtOWU1Yy03NzIyM2NmOTJmZWYiLCJpc3MiOiJod
HRwczpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJ
cmRQcCIsInZlcnNpb24iOjIsImNsaWVudF9pZCI6IjMyMmNzczV1ZjdzNzZ2bHM3MDMwY3VlMnZiIiwib3J
pZ2luX2p0aSI6ImZlYTI3OTQxLWQ5ODctNDJlNi1hNmQ0LTY4YjI1MmRhM2Y4MSIsInRva2VuX3VzZSI6Im
FjY2VzcyIsInNjb3BlIjoib3BlbmlkIiwiYXV0aF90aW1lIjoxNjk5MzUzMDQwLCJleHAiOjE2OTkzNTY2N
DAsImlhdCI6MTY5OTM1MzA0MCwianRpIjoiMzJhZDg3NWYtNzc4YS00MzRhLWJhMDgtNGI5YzlhOGY3MDc4
IiwidXNlcm5hbWUiOiJwcmFkZWVwIn0.fUQdlVFKsumK5WxDyEQLsg02hHdA5KvlHkEf3ySpWtK5U6OEPYc
WTRfS0Gd7ubqOEHN_Ns1gtecKGPNHJ3zATi_cSsPO6OnyaNWf9LlplS3UbWph70G8VGivz7Hy4A8fvm1ZZB
adEKNNKewnv8dBE5xarZ7lUxh0tt8LkxMEDhl7HInboAVBhD_UqhUmQ-
rLovm2JL35J72KhtuhW4k2QkeypWrJDo7Q4GGX7o5v7Pdrwkl2NRyY_AFNNCytnQl7KSUJ-
QTE8azgeqtcoqnzk8jNpDtAUximGCZuKLQPZWBDfvUoIEwypK9GY94AOjdRT6D1GD4gTL6AZygawxHaTg",
"refresh_token":
"eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUlNBLU9BRVAifQ.XC4BvA3NIKkkSKqt5K0
CF7E6Cyl9Rs_x20GY83lV3gliAQDuAkYpztst7VssbLIxNu8jGmPQbaCZa3Pv3dBI6_mXXMU4_L8_TLWgp_
gAJ14StPXW5rJYMEb7lGvGr6FEGj5me1PvEVpe7ejrH7A-
Qvq8QhU8Zpwe9Pulp3TDcnBpJQDnc4VRbqzrIseLY5_HEsJ35UjBUIqzKEvaNbSoTvNyf3FdzAinlzl2K-
J2YEnXmsSHyT3bWKtdasMamYmQvMl8D0MRqk6MBnk5IxrE__4BndJm2J4ZrkSBffvdOnZy6RRacdq7ss-
hKtZLqzewlVrq1WR0hdJuwLshzUT8_A.53efmXHIGfjE-
Raf.bMLTOg5cCGDFidDspgJUe5PiaRhwCDwwVj-pwakdC1i-5tltXi4BvHW-Z_gcCDPXDlFL-
0EBNamqNstnSE7zd9DAef0lbtdSacApKKVwDxs6kMGkj68ai7NrLawaJHpIp6e8yJyheMDuEGdCn_qQblcG
PYAJ1nOpKo4ZAuHiiHfcCK53iU-JgYCxPuZtxJnHu_-mk-gNZtmBBce0g6s6qAIhYAQUrQNx5eGk-
FXlxSVYi0-L0wcTqWWWX1-
snCH74A0ke_EexgFwhcb8t6XYhTbZbIPA2xWpxAgkOsx1_2nlfMugS024tBhl4RVysjLH11p-3v_-
9AJmGht_5xvU_xyK06uB5_SgbYpeRHr29PSqecR1NLcQeFWjTh4whtygSYg31tKSgNHUd7UHhUHZAwCpzAr
82WJJNCPRhyk-IJQNIjn6kpbnB0bcb-f8n6_l-TIOVMdmLVjXSA3D0IwYwFq-
5HBDEcjhOJMHKYqugJMzc8ZumKmvzN4MwrEEeB8twDsSxneWrrqA3JjqRihDmC69tJGbWbSpBw8GqjFn675
eZm12drxkPDnW8348bAR1o14RB12aEwCpgZgiJ7kEsxAjUGak6q1S73V4tsalngIKtMO_7nkf8nn59dvPtw
76rx6tpdcjUlRNXiNl22YRSkqUuN7YvRET1UJvHJwvy10X7NNFUDVVmfvyU-
3pnhLzXHRVXQtXSxDIBScrY8aCMeT1lmOIE3EMir1jU1NZhEVb3tCcEiR1a91g_qc4exogBoO_2Ca9zolyF
7R5p-_gL6tTHnGF_XyO-
OUwlPkKTXpHaZxn9LedCBSh8PhZnfApsQQWYFdG1yyRyiuU3HfNXmiSRRYeHVtd075mjS6G8i1hQncy_IiU
hcLN1NavrKuD5CkRrTCjA9XHje-f7fkrQV3HU9jwY6SqhBaANmkbx85E-Ct4ut5As398Sx2EKc1xJGfn-
4cZcCn2VN_moayKWSHEuMGz8yYejNvhpGY2eK6AgNRjEiJwPYx_3NvBoHc-
Z1aQkG4pQAq2fUNmjIJ6f_NUacMKI6BxU-
p02SMFYLK473hnAGeK5CIM009i_MQex8xxNK6oDR2t8INP9tsAVhw8EpEyM0F9vfS0Wluvi26RO5m1vFQLV
FyzgZkePoieCwVAKr8jRIQ5IRH068PmSBegJJPA5mCxBw.GV5NA4LxOTcAGF5KgdixtA",
"expires_in": 3600,
"token_type": "Bearer"
}
JWT token
- Cryptographically secure token
- Self-contained token
- Not meant for storing sensitive information
- It is a digital signature of the user's identity
- Short-lived token
- Validate the token - https://jwt.io
Step-5
- The client application will send the token to the resource server in the
Authorization header
- The resource server will decode the token
- Extract the claims
- Allow/Deny the resource access
- 403 in case of a forbidden use case
OAuth2-client-application
codebase - git clone https://gitlab.com/10-10-vmware-microservices/ekart-oauth2-
client.git
https://gitlab.com/14-11-synechron-microservices/ekart-oauth2-client.git
dependency of oauth2-client
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
start the server - port 8555
Invoke the endpoint - http://localhost:8555/api/userinfo
import boto3
import json

# SQS queue from the lab above
sqs = boto3.client('sqs')
queue_url = 'https://sqs.ap-south-1.amazonaws.com/831955480324/messages'
# Example payload
message_body = json.dumps({'id': '101', 'name': 'John Doe'})

response = sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=message_body
)
# Print the response
print("Message sent to SQS with MessageId:", response['MessageId'])
Save user
---
import json
import boto3

# Low-level DynamoDB client (matches the attribute-value syntax below)
dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    body = json.loads(event['body'])
    id = body.get('id')
    name = body.get('name')
    city = body.get('city')
    try:
        # Insert data into the DynamoDB table
        response = dynamodb.put_item(
            TableName='users',  # Replace with your table name
            Item={
                'id': {'S': id},
                'name': {'S': name},
                'city': {'S': city}
            }
        )
        return {
            'statusCode': 200,
            'body': json.dumps('Data inserted successfully')
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps(f'Error: {str(e)}')
        }
Request:
{
"body": "{\"id\":\"12345\",\"name\":\"vinay\",\"city\":\"Hydrabad\"}"
}
Fetch users
---
import json
import boto3

dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    try:
        # (sketch: a table scan is assumed here; only the response handling was captured)
        response = dynamodb.scan(TableName='users')
        items_json = json.dumps(response['Items'])
        return {
            'statusCode': 200,
            'body': items_json
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps(f'Error: {str(e)}')
        }
Update users
---
import json
import boto3
dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    body = json.loads(event['body'])
    user_id = body.get('id')
    user_name = body.get('name')
    new_city = body.get('city')
    # (sketch: updating the city attribute is assumed; the original expression was not captured)
    update_expression = 'SET #city = :city'
    expression_attribute_names = {'#city': 'city'}
    expression_attribute_values = {':city': {'S': new_city}}
    response = dynamodb.update_item(
        TableName='users',  # Replace with your table name
        Key={
            'id': {'S': user_id},
            'name': {'S': user_name}
        },
        UpdateExpression=update_expression,
        ExpressionAttributeNames=expression_attribute_names,
        ExpressionAttributeValues=expression_attribute_values,
        ReturnValues='ALL_NEW'
    )
    return {'statusCode': 200, 'body': json.dumps(response['Attributes'])}
Delete user
---
import json
import boto3
dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    body = json.loads(event['body'])
    user_id = body.get('id')
    user_name = body.get('name')
    if not user_id:
        return {
            'statusCode': 400,
            'body': json.dumps('Invalid request. Provide user id.')
        }
    if not user_name:
        return {
            'statusCode': 400,
            'body': json.dumps('Invalid request. Provide user name.')
        }
    # (sketch: the delete call itself was not captured in the notes)
    dynamodb.delete_item(
        TableName='users',
        Key={'id': {'S': user_id}, 'name': {'S': user_name}}
    )
    return {'statusCode': 200, 'body': json.dumps('User deleted')}
API Gateway
const AWS = require("aws-sdk");
const dynamo = new AWS.DynamoDB.DocumentClient();
// Lambda handler invoked by the API Gateway HTTP API routes below
// (handler setup was not captured in the original notes)
exports.handler = async (event, context) => {
let body;
let statusCode = 200;
const headers = {
"Content-Type": "application/json"
};
try {
switch (event.routeKey) {
case "DELETE /items/{id}":
await dynamo
.delete({
TableName: "items",
Key: {
id: event.pathParameters.id
}
})
.promise();
body = `Deleted item ${event.pathParameters.id}`;
break;
case "GET /items/{id}":
body = await dynamo
.get({
TableName: "items",
Key: {
id: event.pathParameters.id
}
})
.promise();
break;
case "GET /items":
body = await dynamo.scan({ TableName: "items" }).promise();
break;
case "PUT /items":
let requestJSON = JSON.parse(event.body);
await dynamo
.put({
TableName: "items",
Item: {
id: requestJSON.id,
price: requestJSON.price,
name: requestJSON.name
}
})
.promise();
body = `Put item ${requestJSON.id}`;
break;
default:
throw new Error(`Unsupported route: "${event.routeKey}"`);
}
} catch (err) {
statusCode = 400;
body = err.message;
} finally {
body = JSON.stringify(body);
}
return {
statusCode,
body,
headers
};
};
OAuth 2.0
Reference Video - https://www.youtube.com/watch?v=996OiexHze0
- It is a framework
- Grant flows depending on the type of client
- Different types of clients are supported
- Public trusted clients - (End users)
- Backend application - Deploy your applications on servers
ex: Java, Python, Nodejs
- Store confidential information
- Back channel
- Secure
- Front end applications - Mobile apps, SPA
ex: Angular/React/JS
- Front channels
- Cannot store confidential information
- insecure
Grant types
- Authorization code
- Backend confidential client
- Proof Key for Code Exchange - PKCE
- Front end public client
- Client Credentials
- service to service communication
- microservice to microservice communication
Step-1
- Auth server exposes a metadata url
- Also referred to as the well-known URL
- Public endpoint
https://dev-7858070.okta.com/oauth2/default/.well-known/oauth-authorization-
server
https://cognito-idp.ap-south-1.amazonaws.com/ap-south-1_zY1QIrdPp/.well-
known/openid-configuration
Details:
issuer: https://dev-7858070.okta.com/oauth2/default,
https://cognito-idp.ap-south-1.amazonaws.com/ap-south-
1_zY1QIrdPp",
authorization_endpoint:
"https://dev-7858070.okta.com/oauth2/default/v1/authorize"
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/authorize",
token_endpoint: "https://dev-7858070.okta.com/oauth2/default/v1/token"
"https://classpath.auth.ap-
south-1.amazoncognito.com/oauth2/token",
registration_endpoint: "https://dev-7858070.okta.com/oauth2/v1/clients"
jwks_uri: "https://dev-7858070.okta.com/oauth2/default/v1/keys"
https://dev-7858070.okta.com/oauth2/default/v1/authorize
https://dev-7858070.okta.com/oauth2/default/.well-known/oauth-authorization-
server
Query parameters
client_id - bknr32pvcbmaooi8bdl2s5n32
response_type - code - fixed
scope - openid - fixed
redirect_uri http://localhost:8555/callback/url
state - 9fe74f91-346d-4b9b-8884-c2e59c2fcce2
http://zoho-app/authorization-code/callback?code=X9OYhYxp47-
xPOEH1vqoej92uaQcFssWupNMEZVRLAM&state=975733f8-f1c6-4eeb-918b
Step-4
- The client application (backend application) uses the auth code
- Exchanges the auth code for the access token using a POST request to the token
endpoint
- POST method
- Endpoint - token endpoint
https://dev-7858070.okta.com/oauth2/default/v1/token
https://classpath.auth.ap-south-1.amazoncognito.com/oauth2/token
- Body
grant_type - authorization_code
redirect_uri - http://zoho-app/authorization-code/callback
code - <~>
- Basic authentication
client-id - 0oaencvv1nJNwQEEB5d7
client-secret - DehMoTrVmwApbj_MvhQq5-
muFcU_U8zlqBEIUPJTtgEJrJCUZlmHn70aNw3sdeNe
Response
{
"id_token":
"eyJraWQiOiJBRFNHSElPSnU2U0UwTlhicmxyXC9TZDZ3NmVpSWNPbFwvYzdJUWdiRHpKeVE9IiwiYWxnIj
oiUlMyNTYifQ.eyJhdF9oYXNoIjoiTEFENzNjeFVEMUNPR3pPbldIMlpPdyIsInN1YiI6IjAxYTdlNWM0LT
g1OGMtNGNkYy1hODBhLTkxOTI2NGQ1NzE3MyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwc
zpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJcmRQ
cCIsImNvZ25pdG86dXNlcm5hbWUiOiJwcmF2ZWVuIiwib3JpZ2luX2p0aSI6Ijk2N2ZjNWM3LTQxNGEtNGY
4Yi1iMzU3LTMxNDMwYjZmNmU3YSIsImF1ZCI6ImJrbnIzMnB2Y2JtYW9vaThiZGwyczVuMzIiLCJldmVudF
9pZCI6IjdhMmMyMjBiLTZhZWEtNGIzOC04NjZjLTllNzA4YTExZTAzNCIsInRva2VuX3VzZSI6ImlkIiwiY
XV0aF90aW1lIjoxNjg3MjU3OTY2LCJleHAiOjE2ODcyNjE1NjYsImlhdCI6MTY4NzI1Nzk2NiwianRpIjoi
ZjY3OTkxMjgtZjQ4NC00OTc1LThhOGYtMGIzZGI5ZmIxNTNhIiwiZW1haWwiOiJwcmFkZWVwLmt1bWFyNDR
AZ21haWwuY29tIn0.jr19FPDjQvXYdiDgN6jcg9_74hUE7ObghXQRRLVAMk_gLA2GQ3LKgLZafuNyLOxWV1
WG7OFtgj8JnXxN073fl5WxXVrjJfHO4C5mXXIxllwCYhEiJBmMwRv_bJO0XqmFnfqEGXN5QpPtFGTyNCuIK
Qej55dAnbt2ZzVJ3ZS10Lj5zPUneijY_7ARU2FchdcRQySkxi4sFJ2ACYAcDBUpySuF7x59ZQCXmmfDbld7
zuPAcyoGsBXTqMtkA10LPvxQphsVAUZ2IxOtz7_QrMHmUoPLd12_3ZawH8o7zF9jdgviAkenJSlcTVfPKWI
uTOXupT0Z_Q1wzIobMXoGrPSSXA",
"access_token":
"eyJraWQiOiJPd1ZTeGVlZlloK01mTUNuYTBrSGEwYjVxTWFxTmErbmtSazdQakVibWFVPSIsImFsZyI6Il
JTMjU2In0.eyJzdWIiOiIwMWE3ZTVjNC04NThjLTRjZGMtYTgwYS05MTkyNjRkNTcxNzMiLCJpc3MiOiJod
HRwczpcL1wvY29nbml0by1pZHAuYXAtc291dGgtMS5hbWF6b25hd3MuY29tXC9hcC1zb3V0aC0xX3pZMVFJ
cmRQcCIsInZlcnNpb24iOjIsImNsaWVudF9pZCI6ImJrbnIzMnB2Y2JtYW9vaThiZGwyczVuMzIiLCJvcml
naW5fanRpIjoiOTY3ZmM1YzctNDE0YS00ZjhiLWIzNTctMzE0MzBiNmY2ZTdhIiwiZXZlbnRfaWQiOiI3YT
JjMjIwYi02YWVhLTRiMzgtODY2Yy05ZTcwOGExMWUwMzQiLCJ0b2tlbl91c2UiOiJhY2Nlc3MiLCJzY29wZ
SI6Im9wZW5pZCIsImF1dGhfdGltZSI6MTY4NzI1Nzk2NiwiZXhwIjoxNjg3MjYxNTY2LCJpYXQiOjE2ODcy
NTc5NjYsImp0aSI6ImZmZmQ2MTBjLWZiMWItNGFhMS04ZjQ5LWM0YjdjNjRjYWUxZiIsInVzZXJuYW1lIjo
icHJhdmVlbiJ9.Fd_XL6UcuGvewg8QEC6JNjr7apXbyK9EgwzadWNDLQgEBx3puYbTkiiy9-
7pukhf5NhCcUDMeF5_cjBsKIM0MHoJ7ZeW_ErpHSbRL_dX2UeGPq88jN0bnZb-7lgwJXo2pR87YWxna-
ci7zyYumEhD9s_phU4D9V_56KKkJnULKrlpKd4N1R1xJ8Z8KYfpbNIWW5TxqQgK0cmvOW_YORcsdppBwxVH
imkzwIG9TQewaKFZBWhfsDguYn-B5vNFimK2r_5laobGgqxdsxD7NG_y-
Jsuj2e80Qdlif6JUiGjT_InPwM4b6PV8dshuf0D_lthf2l3zr1OcF4qdhN7JY6cg",
"refresh_token":
"eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiUlNBLU9BRVAifQ.eNCU5gpGSejY3c8sAFy
L1XQbv47sKNmawjHblhI_puGdJOu9tlzYPVX0LHmhzinXfyA310luuj4xQKUFDAxmaeXn62RUb1IBnvcWpt
G8rU3Zm51eaSd2DTedcdUgk7ULDfNuXneYtcNN0BIVWIQbDA-nMZ6Op-I5VL-
vKy3YEoRxi19sszSLZJWrCtOh1tfABjoSscgpSWOu3eJwYzBKvwiDirxI5Xh4uLT7zdxNjHo-
VEqUl1I4K5EGIK2yrkwLo_EH07xe_9Ubuozm7lM5mL4HfKGR6zVrW-Osr4BjoQaTFtpUnHf8GBmce0E-
9o3nRKKbu45Awyu3IzJnUI99yw.-
huLN6bs7UR3AwIR.Ph61e_7HugWf8k_9wLLlFQoypIkkFqFVQgR2ic9Sfqh0j4OTNv85FouZZioQoYDEDNw
H-lPxtcS5GFViWWkqx0Np9-cpZDcVYmEK-
FaaHKeXleHiNhlS8d8DYgyBemgckwgj5SLcEKWu7lYxJPoDiW3tVfClN_p6Oji9sNLp2_1YQwPo2X1yrTzO
5vIj7lDrrTpfhVz-
jjD9tgzkRYLINK4F7VDNJcqApTVrYF0T5Zkg0iBpSOjGIt_ZwLUMsIagszR1MTxuBL_8vQjxazopLgNJerX
8cgsOMNdxtZc9T1xiSMAAx6srIM3Vw7B_AUKJIqY_QgCoO8C6TX3RuvzB5U98m0XAfp2Mq5vOFlwD6nB2A0
3Xi400eKZ_Quc0-
JFlx9syajqo69HB8PbiOGZOKzq4IaPPQjMAmAAxrkCHIxIbhuDIOrlqmOwY5tyoCy0Olv-
Y0rfx9WKVvBzJ9lQXcxwmht_ggS5RiReMNFi8baERlYaV46gK4UVv8n-
FZ_UErUVzQ9cwzAcktAqQUvMcacD-FGStTPwR-BILDmbLpd0i72YFzrl6jtL-
5fEmvx5ew_BbgavpEDa04pnWfCFq3oVDHJTgLhYZYLdl35YjB3B-
ByTwy7hlj2xensq5qfDqpnaKfCZOcbWa52tdaXo52BC6Hr7ZP3vo7sJxWjVB0WDY5Wv5ApySZQNvyPfSsb2
jeixrov5njufalUPWXI2vfr9rrAAHemFneJ7IDgKndO8mTCUIcSsj_xLgrCFpw1ZnVqWRae6WI6Y2E14moZ
IQmW9S8axPcyUIdef9thT6dPIvEwOGFZGKx8ctaulzRVCIujU-
m6jQmNJU87FZpNQgQxNF8Pg7xbZqvY42IPSimF1adOiHLwKkAa566NDurDf95qCZI00G6hpC_eITrc2giW0
e26CjHtIuWs8IB96ZYubfxDdl7jgtD5EpDMFjEOgpOMdo6_AUzLoeoWJ62Dkm-
ChvOpAlCdy13j3mVnLP2u6FWVx5x8YlAh_MuEszEODt7bph5RAJwGxEQMK_gQ8gKy1_TwsVzdxFPE8VfIal
Sle6P34DBQjWpdllt0ILoxx-
SlfPvCpNCKrhNR0FNkutpYKLPVw0BS0jduiaenKeBNOtVXOakoHIKY1EU4KO7TdHGVjlwXkfqfFThFsQJza
hnVMWck02EOn4LkLolXZztihOpIokr9nBWYPiLxXqbXxIA975lpMbm0rrUvhSPtOe1Rz6phydvXCQOYrxfM
W75lbGXctaTSQftysr.sNCvWx4OrgjOA-K4fR8r4g",
"expires_in": 3600,
"token_type": "Bearer"
}
JWT token
- Cryptographically secure token
- Self-contained token
- Not meant for storing sensitive information
- It is a digital signature of the user's identity
- Short-lived token
- Validate the token - https://jwt.io
Step-5
- The client application will send the token to the resource server in the
Authorization header
- The resource server will decode the token
- Extract the claims
- Allow/Deny the resource access
- 403 in case of a forbidden use case
OAuth2-client-application
codebase - git clone https://gitlab.com/10-10-vmware-microservices/ekart-oauth2-
client.git
https://gitlab.com/14-11-synechron-microservices/ekart-oauth2-client.git
https://drive.google.com/drive/folders/12SiW82_ukoxZvD6EaPCLGC7bpewopdGo?
usp=sharing
https://drive.google.com/drive/folders/1KJxmx9SVkxvLmXU76jI7_cAKPwXGLzl8?
usp=drive_link
https://drive.google.com/drive/folders/198XyNqpJtb_8CDRXDc4vWfn9sC_MzeuU?
usp=sharing