Chapter 1: Introduction to Cloud Computing
Introduction
Cloud computing is the on-demand delivery of computing resources through a
cloud services platform with pay-as-you-go pricing.
Types of Cloud Computing
IaaS (Infrastructure as a Service)
PaaS (Platform as a Service)
SaaS (Software as a Service)
Advantages of Cloud Computing
Trade capital expense for variable expense
Benefit from massive economies of scale
Stop guessing capacity
Increase speed and agility
Stop spending money on running and maintaining data centers
Go global in minutes
Cloud Computing Deployment Models
Cloud (model in which a third-party provider makes compute resources
available to the public over the internet)
Hybrid (model that includes a mix of on-premises, private cloud, and
third-party public cloud)
On-Premises (model in which an organisation runs cloud-like resources within
its own data center on its existing legacy IT infrastructure)
The Cloud Computing Difference
This section compares cloud computing with the traditional environment and
reviews why these new best practices have emerged.
IT Assets Become Programmable resources
Global, Available, and Unlimited Capacity
Security Built In
The AWS cloud provides plenty of security and encryption features with
governance capabilities that enable continuous monitoring of your IT resources
AWS Cloud Architecture Design Principles
Scalability
Systems need to be designed in such a way that they are capable of growing and
expanding over time with no drop in performance
Disposable Resources Instead of Fixed Servers
In a cloud computing environment, you can treat your servers and other
components as temporary disposable resources instead of fixed elements
Automation
One of the design best practices is to automate wherever possible, using the
various AWS automation technologies, to improve the system’s stability and the
efficiency of the organization.
Loose Coupling
IT systems should ideally be designed with reduced interdependency.
Services, Not Servers
Developing large-scale applications requires a variety of underlying technology
components.
Databases
Relational Databases (RDBMS, SQL)
Non-Relational Databases (NoSQL)
Data Warehouse (Amazon Redshift)
Removing Single Points of Failure
Introducing redundancy
Detect Failure
Durable Data Storage
Automated Multi-Data Center Resilience
Fault Isolation and Traditional Horizontal Scaling
Optimize for Cost
Right-Sizing
Elasticity
Take Advantage of the variety of Purchasing Options
Caching
Application Data Caching
Edge Caching
Security
Utilize AWS Features for Defense in Depth
Offload Security Responsibility to AWS
Reduce Privileged Access
Security as Code
Real-Time Auditing
AWS Global Infrastructure
Region
A Region is an entirely independent and separate geographical area.
Availability Zone
An Availability Zone is simply a data centre or a collection of data centres. Each
Availability Zone in a Region has separate power, networking, and connectivity
to reduce the chances of two zones failing simultaneously.
Edge Location
Edge Locations are AWS sites deployed in major cities and highly populated
areas across the globe. There are many more Edge locations than there are
regions.
Regional edge cache
In November 2016, AWS announced a new type of Edge Location, called a
Regional Edge Cache. These sit between your CloudFront Origin servers and the
Edge Locations
Chapter 2: Monitoring, Metrics and Analysis
Introduction
Amazon Web Services is the global leader in cloud computing that provides a
wide variety of IT services. With an extraordinary breadth of available services
to take advantage of, it is critical to monitor what you use and devise an alerting
strategy that works for your organisation. As you operate IT services in the
cloud, you are financially responsible for the AWS costs you incur. Therefore it
is essential to measure your use of AWS services. To help you identify changes
in spending, you need to establish sound financial monitoring. In this chapter,
we will explore monitoring concepts, CloudWatch basics, ways to extend
CloudWatch, create billing alerts, AWS config, how config rules can be used for
troubleshooting and ways to automate actions based on changes within your
AWS environment.
Amazon CloudWatch
Introduction
Amazon CloudWatch is a service used for monitoring AWS cloud resources and
the applications you run on AWS. You can work with CloudWatch through the
following interfaces:
Amazon CloudWatch console
AWS CLI
CloudWatch API
AWS SDKs
How Amazon CloudWatch Works
Amazon CloudWatch is a metrics repository. An AWS service—such as
Amazon EC2—puts metrics into the repository, and you retrieve statistics based
on those metrics. If you put your custom metrics into the repository, you can
retrieve statistics on these metrics as well.
How long are CloudWatch metrics stored?
Amazon CloudWatch stores metrics for two weeks by default. However, you can
also retrieve data older than two weeks by using the GetMetricStatistics API or
by using third-party tools offered by AWS partners.
One-minute data points are available for 15 days
Five-minute data points are available for 63 days
One-hour data points are available for 455 days
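As an illustration only (not from the book), the following boto3 sketch calls the
GetMetricStatistics API to pull one day of five-minute CPU utilisation averages
for a placeholder EC2 instance ID:

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

# Retrieve average CPUUtilization for the last 24 hours in 5-minute periods.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))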
CloudWatch Alarms
A CloudWatch alarm watches a single metric over a time period that you specify
and performs one or more actions when the metric crosses a threshold you define.
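A minimal sketch of creating such an alarm with boto3, assuming a placeholder
instance ID and SNS topic ARN, might look like this:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)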
Monitoring EC2
You can monitor EC2 instances automatically using CloudWatch without
installing additional software. There are two types of monitoring available:
Basic Monitoring: Seven pre-selected metrics at a five-minute
frequency and three status check metrics at one-minute frequency, for
no additional charge.
Detailed Monitoring: All metrics available to Basic Monitoring at one-
minute frequency, for an additional charge. Instances with Detailed
Monitoring enabled allow data aggregation by Amazon EC2 AMI ID
and instance type.
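For illustration, a boto3 sketch that switches an instance (placeholder ID)
between Detailed and Basic Monitoring could look like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable one-minute (Detailed) monitoring; this incurs an additional charge.
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])

# Revert to five-minute (Basic) monitoring.
ec2.unmonitor_instances(InstanceIds=["i-0123456789abcdef0"])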
AWS EC2 Metrics
Amazon EC2 sends metrics to Amazon CloudWatch. You can use the AWS
Management Console, the AWS CLI, or an API to list the metrics that Amazon
EC2 sends to CloudWatch. By default, each data point covers the previous 5
minutes of activity for an instance. If you've enabled detailed monitoring, each
data point covers the previous 1 minute of activity.
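As a hedged example, the boto3 list_metrics call below lists the EC2 metrics
that a placeholder instance has sent to CloudWatch:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(
    Namespace="AWS/EC2",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
):
    for metric in page["Metrics"]:
        print(metric["MetricName"])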
Custom Metrics
If you need OS-specific metrics, such as memory and disk-related metrics, you
need to use CloudWatch custom metrics. You can publish your metrics to
CloudWatch using the AWS CLI or an API. You can view statistical graphs of
your published metrics with the AWS Management Console
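A minimal sketch of publishing a custom OS-level metric with boto3; the
namespace, metric name, and value here are assumptions for illustration:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a hypothetical memory-utilisation reading for one instance.
cloudwatch.put_metric_data(
    Namespace="Custom/System",
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Unit": "Percent",
            "Value": 61.5,
        }
    ],
)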
EC2 Status Checks
You can monitor the status of your instances by viewing status checks and
scheduled events for your instances.
Types of Status Checks
System status checks
Instance Status checks
Monitoring EBS
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage
volumes for use with Amazon EC2 instances in the AWS Cloud. EBS allows
you to create storage volumes and attach them to Amazon EC2 instances in the
same Availability Zone
Amazon EBS Volume Types
General Purpose SSD (gp2)
Provisioned IOPS SSD (io1)
Throughput Optimized HDD (st1)
Cold HDD (sc1)
Pre-warming EBS Volumes
New EBS volumes receive their maximum performance the moment that they
are available and do not require initialization (formerly known as pre-warming).
EBS CloudWatch Metrics
Amazon Elastic Block Store (Amazon EBS) sends data points to CloudWatch
for several metrics.
EBS Volume Status Checks
Volume status checks enable you to better understand, track, and manage
potential inconsistencies in the data on an Amazon EBS volume. They are
designed to provide you with the information that you need to determine whether
your Amazon EBS volumes are impaired, and to help you control how a
potentially inconsistent volume is handled
Modifying EBS Volumes
If your Amazon EBS volume is attached to a current generation EC2 instance
type, you can increase its size, change its volume type, or (for an io1 volume)
adjust its IOPS performance, all without detaching it. You can apply these
changes to detached volumes as well.
Issue the modification command (console or command line)
Monitor the progress of the modification
If the size of the volume was modified, extend the volume’s file
system to take advantage of the increased storage capacity.
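A minimal boto3 sketch of these steps (the volume ID and new size are
placeholders); the final file-system extension happens inside the instance and is
outside boto3's scope:

import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 1: issue the modification command (grow the volume to 200 GiB).
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)

# Step 2: monitor the progress of the modification.
while True:
    mods = ec2.describe_volumes_modifications(
        VolumeIds=["vol-0123456789abcdef0"]
    )["VolumesModifications"]
    state = mods[0]["ModificationState"]
    print("Modification state:", state)
    if state in ("optimizing", "completed"):
        break
    time.sleep(15)

# Step 3: extend the file system from within the instance
# (for example with growpart/resize2fs on Linux).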
Monitoring RDS
With Amazon RDS, you can monitor network throughput, I/O for read, write,
and metadata operations, client connections, and burst credit balances for your
DB instances.
There are two types of monitoring available for RDS:
In CloudWatch you can monitor RDS by metrics
In RDS itself, you can monitor RDS by events
Monitoring ELB
You can use CloudWatch to monitor your load balancers, analyse traffic
patterns, and troubleshoot issues with your load balancers and back-end
instances. Elastic Load Balancing publishes data points to Amazon CloudWatch
for your load balancers and your back-end instances.
Monitoring ElastiCache
Amazon ElastiCache is a managed, in-memory data store service. It simplifies
and offloads the management, monitoring, and operation of in-memory cache
environments, enabling you to focus on the differentiating parts of your
applications. Amazon ElastiCache provides support for two engines,
Memcached and Redis.
Amazon ElastiCache for Redis – Manage and analyse fast moving data
with a versatile in-memory data store.
Amazon ElastiCache for Memcached – Build a scalable Caching Tier
for data-intensive apps.
AWS Organizations
AWS Organizations is an account management service that allows you to
consolidate multiple AWS accounts into an organisation, enabling you to create
a hierarchical structure that can be managed centrally.
Key Features of AWS Organizations
Group-based account management
Policy framework for multiple AWS accounts
API-level control of AWS services
Account creation and management APIs
Consolidated Billing
Enable only Consolidated Billing features
Consolidated Billing
Consolidated billing has the following key benefits:
One Bill – Get one bill for multiple accounts.
Easy Tracking – Easily track each account's charges.
Combined Usage – Combine usage from all accounts in the
organisation to receive volume discounts.
Monitor Charges Using Billing Alarms
You can monitor your estimated AWS charges using Amazon CloudWatch.
When you enable the monitoring of estimated charges for your AWS account,
the estimated charges are calculated and sent several times daily to CloudWatch
as metric data. This data includes the estimated charges for every service in
AWS that you use, in addition to the estimated overall total of your AWS
charges. The alarm triggers when your account billing exceeds the threshold you
specify. It triggers only when actual billing exceeds the threshold, not
projections based on your usage so far in the month.
Enable Billing Alerts
Before you can create an alarm for your estimated charges, you must enable
billing alerts, so that you can monitor your estimated AWS charges and create an
alarm using billing metric data.
Create Billing Alarm
After you've enabled billing alerts, you can create a billing alarm. In this
procedure, you create an alarm that sends an email message when your estimated
charges for AWS exceed a specified threshold.
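A hedged boto3 sketch of such a billing alarm (the threshold and SNS topic ARN
are placeholders; billing metrics are published in us-east-1, and billing alerts
must already be enabled):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Notify an SNS topic when estimated charges exceed USD 100.
cloudwatch.put_metric_alarm(
    AlarmName="billing-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # estimated charges are published several times a day
    EvaluationPeriods=1,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],
)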
Check the Alarm Status
You can check the status of your billing alarm.
Delete a Billing Alarm
You can delete your billing alarm when you no longer need it.
Cost Optimization
By following a few simple steps, you can effectively control your AWS costs:
Right-size your services to meet capacity needs at the lowest cost.
Save money when you reserve.
Use the spot market.
Monitor and track service usage.
Use Cost Explorer to optimise savings.
On-Demand Instance
With on-demand instances, you pay for computing capacity by the hour, with no
minimum commitments required.
Reserved Instance
Reserved Instances allow you to reserve computing capacity in advance for
long-term savings. It provides significant discounts (up to 60 per cent) compared
to On-Demand Instance pricing.
Spot Instance
You can bid for unused Amazon Elastic Compute Cloud (Amazon EC2)
capacity. Instances are charged at Spot Price, which is set by Amazon EC2 and
fluctuates, depending on supply and demand. If your bid exceeds the current
Spot Price, your requested instances will run until either you terminate them or
the Spot Price increases above your bid.
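For illustration only, a boto3 request for Spot capacity might look like the
sketch below (the AMI ID, maximum price, and key pair name are placeholders;
newer Spot workflows can also omit the price entirely):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",  # the maximum price you are willing to pay per hour
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.medium",
        "KeyName": "my-key-pair",
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])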
Chapter 3: High Availability
Introduction
Amazon Web Services provides services and infrastructure to build reliable,
fault-tolerant, and highly available systems in the cloud. Most of the higher-level
services, such as Amazon Simple Storage Service (S3), Amazon SimpleDB,
Amazon Simple Queue Service (SQS), and Amazon Elastic Load Balancing
(ELB), have been built with fault tolerance and high availability in mind.
Fault Tolerance and High Availability
Load balancing is an effective way to increase the availability of a system.
Instances that fail can be replaced seamlessly behind the load balancer while
other instances continue to operate. Elastic Load Balancing can be used to
balance across instances in multiple availability zones of a region.
Elasticity and Scalability
Elasticity
Elasticity is primarily the on-demand creation (and release) of virtual machines
to meet the real-time resource requirements in cloud computing.
Scalability
Scalability is mainly about handling increasing work at the application layer
and accommodating the growing data in the system.
There are two ways to scale an IT architecture:
Vertically
Horizontally.
Amazon Relational Database Service (RDS)
Amazon RDS makes it easy to set up, operate, and scale a relational database in
the cloud. It automates time-consuming administrative tasks such as hardware
provisioning, database setup, recovery, and backups, and it offers cost-efficient
and resizable capacity. By using Amazon RDS, you are free to focus on your
applications so that you can give them the fast performance, high availability,
security, and compatibility they require.
There are several database engines on which Amazon RDS can be used, which
include the following:
Aurora.
MySQL.
Oracle.
PostgreSQL.
RDS Multi-AZ Failover
Amazon RDS Multi-AZ deployments provide improved availability
and durability for Database (DB) Instances.
When you establish a Multi-AZ DB Instance, Amazon RDS creates a
primary DB Instance by itself and replicates the data synchronously to
a backup instance in a different Availability Zone (AZ).
Each AZ runs on its own physically distinct, independent
infrastructure, and is designed to be highly reliable.
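A minimal sketch (not from the book) of creating a Multi-AZ MySQL instance
with boto3; the identifiers and password are placeholders, and RDS provisions
the standby replica in another AZ automatically:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,  # synchronous standby in a different Availability Zone
)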
Failover Conditions
Amazon RDS identifies and automatically recovers from the most common
failure scenarios for Multi-AZ deployments so that you can continue database
operations as quickly as possible without administrative intervention. Amazon
RDS automatically performs a failover in the event of any of the following:
Loss of availability in the primary Availability Zone
Loss of network connectivity to the primary
Compute unit failure on the primary
Storage failure on the primary
RDS Using Read Replicas
Amazon RDS Read Replicas provide enhanced performance and durability for
database (DB) instances. This feature makes it easy to elastically scale out
beyond the capacity limitations of a single DB Instance for read-heavy database
workloads. You can create one or more replicas of a given source DB Instance
and serve high-volume application read traffic from several copies of your data,
thereby increasing total read throughput. Read replicas can also be promoted
when needed to become standalone DB instances. Read replicas are available in
Amazon RDS for MySQL, MariaDB, and PostgreSQL as well as Amazon
Aurora.
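An illustrative boto3 call that creates a read replica of an existing MySQL
source instance (identifiers are placeholders); the replica can later be promoted
with promote_read_replica if needed:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="example-db-replica-1",
    SourceDBInstanceIdentifier="example-db",
    DBInstanceClass="db.t3.medium",
)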
Bastion Host and High Availability
Introduction
A bastion host is used to provide secure access to Linux instances located in
the private and public subnets. The architecture used for this purpose is the
Quick Start architecture, which deploys a Linux bastion host into every public
subnet to give the environment readily accessible administrative access.
Deployment Steps
Prepare AWS account
Launch the stack
Add AWS service
Troubleshooting Potential Auto Scaling Issues:
The following are possible reasons why your instances are not launching into an
Auto Scaling group:
Auto Scaling configuration is not working correctly.
Security group does not exist.
Auto Scaling group not found.
AZ is no longer supported.
Invalid EBS device mapping.
Auto Scaling service is not enabled on your account.
Associated key pair does not exist.
Attempting to attach an EBS block device to an instance-store AMI.
Instance type specified is not supported in the AZ.
Chapter 4: Deployment and Provisioning
Introduction
Amazon Web Services offers multiple options for provisioning your IT
infrastructure and the deployment of your applications. Whether it is a simple
three-tier application or a complex set of workloads, the deployment model
varies from customer to customer.
AWS Deployment Services
When it comes to deployment services, AWS has multiple options, too:
AWS Elastic Beanstalk
AWS Elastic Beanstalk is the fastest and most straightforward way to get an
application up and running on AWS.
AWS CloudFormation
AWS CloudFormation provides the sysadmin, network architect, and other IT
personnel the ability to provision and manage stacks of AWS resources based on
templates you create to model your infrastructure architecture
AWS OpsWorks
AWS OpsWorks is an application management service that makes it easy for
both developers and operations personnel to deploy and operate applications of
all shapes and sizes
OpsWorks
Cloud-based applications usually require a group of related resources such as
application servers, database servers etc. that must be created and managed
collectively. This collection of instances is called a stack. AWS OpsWorks
provides a simple and straightforward way to create and manage stacks and their
associated applications and resources.
Chef
Chef turns infrastructure into code. With Chef, you can automate how you build,
deploy, and manage your system’s infrastructure. Your system becomes as
version-able, testable, and repeatable as application code.
Root/Admin Access Services
Services with root/admin access to the operating system include the following
four:
Elastic Beanstalk
Elastic MapReduce
OpsWorks
EC2
Remember that you do not have root/admin access to RDS, DynamoDB, S3 or
Glacier.
Elastic Load Balancing
Introduction
Elastic Load Balancing (ELB) automatically distributes incoming application
traffic across multiple EC2 instances. It seamlessly provides necessary load
balancing capacity required for application traffic distribution so that you can
achieve higher levels of fault tolerance in your applications.
ELB Configurations
A load balancer accepts incoming traffic from clients and routes requests to its
registered targets (such as EC2 instances) in one or more Availability Zones.
Configuration Types
External Elastic Load Balancer
Internal Elastic Load Balancer
Sticky Sessions
By default, a Load Balancer routes each request independently to the registered
instance with the smallest load. However, you can use the sticky session feature
(also known as session affinity), which enables the load balancer to bind a user's
session to a specific instance. This ensures that all requests from the user during
the session are sent to the same instance.
Sticky Session Types
Duration-Based Session Stickiness
Application-Controlled Session Stickiness
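As a hedged sketch for a Classic Load Balancer (the load balancer and policy
names are placeholders), boto3 can create both kinds of stickiness policy:

import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Duration-based stickiness: the load balancer issues its own cookie.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-classic-elb",
    PolicyName="duration-sticky",
    CookieExpirationPeriod=3600,  # seconds
)

# Application-controlled stickiness: follow the application's session cookie.
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-classic-elb",
    PolicyName="app-sticky",
    CookieName="JSESSIONID",
)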
Pre-Warming Elastic Load Balancer
Elastic Load Balancing scales gradually as traffic grows, so a sudden spike in
traffic can exceed the capacity of the load balancer. To handle this, AWS support
can pre-configure the load balancer so that it has the appropriate level of
capacity for the expected incoming traffic. This method of pre-configuration is
known as “pre-warming” a load balancer.
Chapter 5: Data Management
Disaster Recovery
Disaster recovery (DR) is about preparing for and recovering from a disaster. A
disaster is an event that has a negative impact on a company’s business
continuity or finances. This includes hardware or software failure, a network
outage, a power outage, physical damage to a building such as fire or flooding,
human error, or some other significant event.
Traditional Approaches to DR
Facilities to house the infrastructure, including power and cooling.
Security to ensure the physical protection of assets.
Suitable capacity to scale the environment.
Support for repairing, replacing, and refreshing the infrastructure.
Contractual agreements with an Internet service provider (ISP) to
provide Internet connectivity that can sustain bandwidth utilisation for
the environment under a full load.
Network infrastructure such as firewalls, routers, switches, and load
balancers.
Enough server capacity to run all mission-critical services, including
storage appliances for the supporting data, and servers to run
applications and backend services such as user authentication, Domain
Name System (DNS),
Dynamic Host Configuration Protocol (DHCP), monitoring, and
alerting.
Using AWS for DR
AWS services and features can be leveraged for your disaster recovery (DR)
processes to significantly minimise the impact on your data, your system, and
your overall business operations.
AWS Features and Services Essential for Disaster Recovery
In the preparation phase of DR, it is essential to consider the use of services and
features that support data migration and durable storage, because they enable
you to restore backed-up, critical data to AWS when disaster strikes. For some
of the scenarios that involve either a scaled-down or a fully scaled deployment
of your system in AWS, compute resources will be required as well.
Region
Amazon Web Services are available in multiple regions around the globe, so you
can choose the most appropriate location for your DR site, in addition to the
place where your system is fully deployed.
Storage
Amazon Simple Storage Service (S3)
Amazon Glacier
Amazon Elastic Block Store
AWS Import/Export
AWS Storage Gateway
Gateway-Cached Volumes
Gateway-Stored Volumes
Gateway-Virtual Tape Library
Compute
Amazon EC2
Amazon EC2 VM import Connector
Networking
Amazon Route 53
Elastic Load Balancing
Amazon Virtual Private Cloud (VPC)
AWS Direct Connect
Database
Amazon Relational Database Service (RDS)
Amazon DynamoDB
Amazon Redshift
Deployment Orchestration
Deployment automation and post-startup software installation/configuration
processes and tools can be used in Amazon EC2. This can be very helpful in the
recovery phase, enabling you to create the required set of resources in an
automated way.
AWS Elastic Beanstalk
AWS OpsWorks
RTO and RPO
The two common industry terms for disaster planning include:
Recovery time objective (RTO) — the length of time within which you
must recover from a disaster. It is measured from when the disruption
first occurs to when you have fully recovered from it.
Recovery point objective (RPO) — the amount of data your
organisation is prepared to lose in the event of a disaster.
Disaster Recovery Scenarios with AWS
This section outlines four DR scenarios that highlight the use of AWS and
compare AWS with traditional DR methods. The following figure shows a
spectrum for the four scenarios, arranged by how quickly a system can be available
to users after a DR event.
Backup and Restore
In most traditional environments, data is backed up to tape and sent off-site
regularly. If you use this method, it can take a long time to restore your system
in the event of a disruption or disaster. Amazon S3 is an ideal destination for
backup data that might be needed quickly to perform a restore. Transferring data
to and from Amazon S3 is typically done through the network and is therefore
accessible from any location.
Pilot Light for Quick Recovery
The term pilot light is often used to describe a DR scenario in which a minimal
version of an environment is always running in the cloud. The idea of the pilot
light is an analogy that comes from the gas heater. In a gas heater, a small flame
that’s always on can quickly ignite the entire furnace to heat up a house
Warm Standby Solutions
The term warm standby is used to describe a DR scenario in which a scaled-
down version of a fully functional environment is always running in the cloud. A
warm standby solution extends the pilot light elements and preparation. It further
decreases the recovery time because some services are always running. By
identifying your business-critical systems, you can fully duplicate these systems
on AWS and have them always on.
Multi-Site Solution
A multi-site solution runs in AWS as well as on your existing on-site
infrastructure, in an active-active configuration. The data replication method that
you employ will be determined by the recovery point that you choose. You can
use a DNS service that supports weighted routing, such as Amazon Route 53, to
route production traffic to different sites that deliver the same application or
service. A proportion of traffic will go to your infrastructure in AWS, and the
remainder will go to your on-site infrastructure.
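A minimal sketch of weighted routing with boto3 (the hosted zone ID, domain
name, and IP addresses are placeholders): roughly 70% of traffic goes to the AWS
site and 30% to the on-site infrastructure.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "aws-site",
                    "Weight": 70,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "on-premises-site",
                    "Weight": 30,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.10"}],
                },
            },
        ]
    },
)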
Failing Back from Disaster
The following steps outline the different fail-back approaches:
Backup and Restore
Pilot Light, Warm Standby, and Multi-site
Chapter 6: Security
Security Token Service(STS)
The AWS Security Token Service (STS) is a web service that enables you to
request temporary, limited-privilege credentials for AWS Identity and Access
Management (IAM) users or for users that you authenticate (federated users).
Identity Federation
You can manage your user identities in an external system outside of AWS and
grant users who sign in from those systems access to perform AWS tasks and
access your AWS resources. IAM supports two types of identity federation.
Enterprise Identity Federation
Web Identity Federation
Roles for Cross Account Access
Many organisations maintain more than one AWS account. Using roles and
cross-account access, you can define user identities in one account, and use those
identities to access AWS resources in other accounts that belong to your
organisation.
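An illustrative boto3 sketch of cross-account access (the account ID and role
name are placeholders): assume a role in another account and use the temporary
credentials it returns.

import boto3

sts = boto3.client("sts")

assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountAdmin",
    RoleSessionName="cross-account-session",
)

# Build a client in the other account from the temporary credentials.
creds = assumed["Credentials"]
s3_other_account = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3_other_account.list_buckets()["Buckets"]])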
Roles for Amazon EC2
If you run applications on Amazon EC2 instances and those applications need
access to AWS resources, you can provide temporary security credentials to your
instances when you launch them. These temporary security credentials are
available to all applications that run on the instance, so you don't need to store
any long-term credentials on the instance.
AWS Shared Responsibility Model
The management of the security in the cloud is slightly different from the
security in the on-premises data centre. Migrating computer systems and data to
the cloud requires AWS and customers to work together towards security
objectives. The security responsibilities become shared between the user and the
cloud service provider. Under this shared responsibility model, AWS is
responsible for securing the underlying infrastructure that supports the cloud,
and the user is accountable for anything deployed in the cloud or connected to
the cloud.
AWS Security Responsibilities
AWS operates, manages, and controls the components from the host operating
system and virtualisation layer down to the physical security of the facilities
in which the services operate.
Therefore, AWS is responsible for securing their whole global infrastructure
including foundational compute, storage, networking and database services, as
well as higher-level services.
Customer Security Responsibility
As AWS customers retain control over their data, they consequently hold the
responsibilities relating to that content as part of the AWS “shared
responsibility” model. Their duty is to protect the confidentiality, integrity, and
availability of their data in the cloud. They undertake responsibility for the
management of their operating system (including updates and security patches),
other associated application software, as well as the configuration of the AWS-
provided security group firewall. AWS provides a range of security services and
features that AWS customers can use to secure their assets.
AWS Global Infrastructure Security
The AWS global infrastructure is one of the most flexible and secure cloud
computing platforms available today. It is designed to offer an exceptionally
scalable, highly reliable platform that facilitates customers in deploying
applications and data swiftly and securely.
The infrastructure includes the services, network, hardware, and operational
software (e.g., host OS, virtualisation software, etc.) that supports the
provisioning and use of computing resources.
AWS Compliance Program
AWS computing environments are continuously audited, with certifications from
accreditation bodies across geographies and verticals, including ISO 27001,
FedRAMP, DoD CSM, and PCI DSS. By operating in an accredited
environment, customers reduce the scope and cost of audits they need to
perform. AWS continuously undergoes assessments of its underlying
infrastructure including the physical and environmental security of its hardware
and data centres so customers can take advantage of those certifications and
merely inherit those controls. The following are the programs that AWS has
regarding compliance. They are divided into three areas:
Certification/Attestation
Laws, Regulations, and Privacy
Alignments and Frameworks
Physical and Environmental Security
AWS’s data centres are state of the art, utilising innovative architectural and
engineering approaches. AWS data centres are housed in nondescript facilities.
Physical access is strictly controlled both at the perimeter and at building ingress
points by professional security staff utilising video surveillance, intrusion
detection systems, and other electronic means.
Network Security
The AWS network has been architected to permit you to select the level of
security and resiliency appropriate for your workload. To enable you to build
geographically dispersed, fault-tolerant web architectures with cloud resources,
AWS has implemented a world-class network infrastructure that is carefully
monitored and managed.
AWS Account Security Features
AWS provides a variety of tools and features that you can use to keep your AWS
Account and resources safe from unauthorised use. This includes credentials for
access control, HTTPS endpoints for encrypted data transmission, the creation of
separate IAM user accounts, user activity logging for security monitoring, and
Trusted Advisor security checks.
AWS Credentials
To help ensure that only authorised users and processes access your AWS
Account and resources, AWS uses several types of credentials for authentication.
AWS Trusted Advisor Security Checks
Trusted Advisor inspects your AWS environment and makes recommendations
when opportunities may exist to save money, improve system performance, or
close security gaps.
Amazon EC2 Security
Amazon Elastic Compute Cloud (EC2) is a key component in Amazon’s
Infrastructure as a Service (IaaS), providing resizable computing capacity using
server instances in AWS’s data centres.
Multiple Levels of Security
The Hypervisor
Instance Isolation
Amazon EBS Security
Encryption of sensitive data is generally a good security practice, and AWS
provides the ability to encrypt EBS volumes and their snapshots with AES-256.
The encryption occurs on the servers that host the EC2 instances, providing
encryption of data as it moves between EC2 instances and EBS storage.
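An illustrative boto3 call that creates an encrypted gp2 volume (the
Availability Zone is a placeholder); snapshots taken from an encrypted volume
are encrypted as well:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,  # GiB
    VolumeType="gp2",
    Encrypted=True,  # uses the default AWS-managed EBS key unless a KMS key is given
)
print(volume["VolumeId"], volume["Encrypted"])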
Amazon ELB Security
Amazon Elastic Load Balancer provides several security benefits:
Takes over the encryption and decryption work from the Amazon EC2
instances and manages it centrally on the load balancer
Offers clients a single point of contact, and can also serve as the first
line of defence against attacks on your network
When used in an Amazon VPC, supports the creation and management
of security groups associated with your Elastic Load Balancing to
provide additional networking and security options etc.
AWS Direct Connect Security
With Direct Connect, you bypass Internet service providers in your network
path. You can procure rack space within the facility housing the AWS Direct
Connect location and deploy your equipment nearby. Once implemented, you
can connect this equipment to AWS Direct Connect using a cross-connect
Auditing on AWS
You should periodically audit your security configuration to make sure it meets
your current business needs. An audit gives you an opportunity to remove
unneeded IAM users, roles, groups, and policies, and to make sure that your
users and software have only the permissions that are required. Your
organisation may undergo an audit. This could be for PCI Compliance, ISO
27001, SOC, etc. There is a level of shared responsibility with regard to such
audits.
Chapter 7: Networking
Domain Name System(DNS)
The Domain Name System (DNS) converts human-friendly domain names into Internet
Protocol (IP) addresses. These IP addresses are used by computers and networking
devices to identify each other on the network.
Internet Protocol(IP)
Two versions of the Internet Protocol are in frequent use in the Internet today,
IPv4 and IPv6:
An IPv4 address has a size of 32 bits, which limits the address space to
4,294,967,296 (2^32) addresses. Of this number, some addresses are
reserved for special purposes such as private networks (~18 million
addresses) and multicast addressing (~270 million addresses).
In IPv6, the address size was increased from 32 bits in IPv4 to 128 bits
or 16 octets, thus providing up to 2^128 (approximately 3.403×10^38)
addresses. This is deemed sufficient for the foreseeable future.
Top Level Domain(TLD)
A domain name consists of one or more parts, technically called labels, that are
conventionally concatenated, and delimited by dots, such as example.com. The
right-most label conveys the top-level domain; for example, the domain name
www.example.com belongs to the top-level domain com
Domain Name Registration
The right to use a domain name is delegated by domain name registrars who are
accredited by the Internet Corporation for Assigned Names and Numbers
(ICANN) or other organisations such as OpenNIC, which are charged with
overseeing the name and number systems of the Internet.
DNS Records
The most common types of records stored in the DNS database are for Start of
Authority (SOA), name servers (NS), IP addresses (A Records), domain name
aliases (CNAME).
Start of Authority(SOA)
A start of authority (SOA) record is information stored in a domain name system
(DNS) zone about that zone and other DNS records
Name Servers(NS)
NS stands for Name Server records and is used by Top Level Domain servers to
direct traffic to the Content DNS server that contains the authoritative DNS
records.
A Records
The A record is used by a computer to translate the name of the domain to the IP
address. For example, http://www.ipspecialist.net might point to
http://123.10.10.80.
CNAMES
A Canonical Name (CName) can be used to resolve one domain name to
another.
Time to Live(TTL)
The length of time a DNS record is cached on either the resolving server or the
user’s own local PC is the ‘Time To Live’ (TTL), in seconds. The lower the Time
To Live (TTL), the faster changes to DNS records propagate throughout the
internet.
Alias Records
Alias records are used to map resource record sets in your hosted zone to Elastic
Load Balancers, CloudFront distributions, or S3 buckets that are configured as
websites.
Introduction to Route 53
Amazon Route 53 provides a highly available and scalable cloud DNS web
service that effectively connects user requests to infrastructure running in AWS,
such as EC2 instances, Elastic Load Balancers, or Amazon S3 buckets.
DNS Management
If you already have a domain name, such as example.com, Route 53 can tell the
Domain Name System (DNS) where on the internet to find web servers, mail
servers, and other resources for your domain.
Traffic Management
Route 53 traffic flow provides a visual tool that you can use to create and update
sophisticated routing policies to route end users to multiple endpoints for your
application
Availability Monitoring
Route 53 can monitor the health and performance of your application as well as
your web servers and other resources.
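A hedged boto3 sketch of a Route 53 health check that probes an HTTP endpoint
every 30 seconds (the domain name and path are placeholders):

import uuid
import boto3

route53 = boto3.client("route53")

response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # unique string that makes the call idempotent
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "app.example.com",
        "ResourcePath": "/health",
        "Port": 80,
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
print(response["HealthCheck"]["Id"])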
Domain Registration
If you need a domain name, you can find an available name and register it by
using Route 53. Amazon Route 53 will automatically configure DNS settings for
your domains.
Introduction to VPC
Amazon VPC lets you provision a logically isolated section of the AWS cloud
where you can launch AWS resources in a virtual network that you define. You
have complete control over your virtual networking environment, including
selection of your own IP address ranges, creation of subnets, and configuration
of route tables and network gateways.
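A minimal sketch (the CIDR ranges and AZ are assumptions) of provisioning a VPC,
one public subnet, and an internet gateway with boto3:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC and a subnet inside it.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)["Subnet"]["SubnetId"]

# Attach an internet gateway so the subnet can be made public.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

print(vpc_id, subnet_id, igw_id)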
Features and Benefits
Multiple Connectivity options
Secure
Simple
Scalability and reliability
Components of Amazon VPC
Virtual Private Cloud
Subnet
Internet Gateway
NAT Gateway
Hardware VPN Connection
Virtual Private Gateway
Customer Gateway
Router
Peering Connection
VPC Endpoints
Egress-Only Internet Gateway
VPC Configuration Scenarios
VPC with a Single Public Subnet
VPC with Public and Private Subnets (NAT)
VPC with Public and Private Subnets and Hardware VPN Access
VPC with a Private Subnet and Hardware VPN Access
VPC Connectivity Options
Amazon VPC provides multiple network connectivity options for you to
leverage depending on your current network designs and requirements. These
connectivity options include leveraging either the internet or an AWS Direct
Connect connection as the network backbone and terminating the connection
into either AWS or user-managed network endpoints
Network-to-Amazon VPC Connectivity Options
AWS managed VPN
AWS Direct Connect
AWS Direct Connect Plus VPN
AWS VPN CloudHub
Software VPN
Transit VPC
Amazon VPC-to-Amazon VPC Connectivity Options
VPC peering
Software VPN
Software-to-AWS managed VPN
AWS Direct Connect
AWS PrivateLink
Internal User-to-Amazon VPC Connectivity Options
Software Remote-Access VPN
Transit VPC
Hardware VPN
Amazon VPC provides the option of creating an IPsec, hardware VPN
connection between remote customer networks and their Amazon VPC over the
Internet.
AWS Direct Connect
AWS Direct Connect makes it easy to establish a dedicated connection from an
on-premises network to Amazon VPC. Using AWS Direct Connect, you can
create private connectivity between AWS and your data centre, office, or
colocation environment.
Software VPN
Amazon VPC offers you the flexibility to fully manage both sides of your
Amazon VPC connectivity by creating a VPN connection between your remote
network and a software VPN appliance running in your Amazon VPC network.
This option is recommended if you must manage both ends of the VPN
connection either for compliance purposes or for leveraging gateway devices
that are not currently supported by Amazon VPC’s hardware VPN solution.
AWS Direct Connect+ VPN
With AWS Direct Connect + VPN, you can combine one or more AWS Direct
Connect dedicated network connections with the Amazon VPC hardware VPN.
This combination provides an IPsec-encrypted private connection that also
reduces network costs, increases bandwidth throughput, and offers a more
consistent network experience than Internet-based VPN connections.
AWS VPN CloudHub
The AWS VPN CloudHub operates on a simple hub-and-spoke model that you
can use with or without a VPC. Use this design if you have multiple branch
offices and existing Internet connections and would like to implement a
convenient, potentially low-cost hub-and-spoke model for primary or backup
connectivity between these remote offices.