In a cloud-based deployment model, you can migrate existing applications to the cloud, or you can
design and build new applications in the cloud. You can build those applications on low-level
infrastructure that requires your IT staff to manage it. Alternatively, you can build them using
higher-level services that reduce the management, architecting, and scaling requirements of the core
infrastructure.
On-premises deployment is also known as a private cloud deployment. In this model, resources are
deployed on premises by using virtualization and resource management tools.
For example, you might have applications that run on technology that is fully kept in your on-premises
data center. Though this model is much like legacy IT infrastructure, its incorporation of application
management and virtualization technologies helps to increase resource utilization.
In a hybrid deployment, cloud-based resources are connected to on-premises infrastructure. You might
want to use this approach in a number of situations. For example, you might have legacy applications that
are better maintained on premises, or government regulations might require your business to keep certain
records on premises.
For example, suppose that a company wants to use cloud services that can automate batch data
processing and analytics. However, the company has several legacy applications that are more suitable
on premises and will not be migrated to the cloud. With a hybrid deployment, the company would be
able to keep the legacy applications on premises while benefiting from the data and analytics services
that run in the cloud.
Benefits of cloud computing:
• Trade upfront expense for variable expense
• Stop spending money to run and maintain data centers
• Stop guessing capacity
• Benefit from massive economies of scale
• Increase speed and agility
• Go global in minutes
EC2 Instance Types
• General purpose instances
• Compute optimized instances
• Memory optimized instances
• Accelerated computing instances - Accelerated computing instances use hardware accelerators,
or coprocessors, to perform some functions more efficiently than is possible in software running
on CPUs. Examples of these functions include floating-point number calculations, graphics
processing, and data pattern matching. In computing, a hardware accelerator is a component
that can expedite data processing. Accelerated computing instances are ideal for workloads
such as graphics applications, game streaming, and application streaming.
• Storage optimized instances
EC2 Pricing
• On-Demand
• Amazon EC2 Savings Plans
• Reserved Instances
• Spot Instances
• Dedicated Hosts
Amazon EC2 Auto Scaling
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive scaling.
Dynamic scaling responds to changing demand. Predictive scaling automatically schedules the right
number of Amazon EC2 instances based on predicted demand.
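A minimal sketch of dynamic scaling with the AWS SDK for Python (boto3): it attaches a target tracking
policy to an existing Auto Scaling group. The group name and the 50 percent CPU target are placeholder
values, not part of the course material.

    import boto3

    # Hypothetical example: keep the average CPU utilization of an existing
    # Auto Scaling group (here called "my-asg") near 50 percent.
    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",          # placeholder group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",     # dynamic scaling policy type
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,                # scale out and in around 50% CPU
        },
    )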
Elastic Load Balancing
Low-demand period - Here’s an example of how Elastic Load Balancing works. Suppose that a few
customers have come to the coffee shop and are ready to place their orders. If only a few registers are
open, this matches the demand of customers who need service. The coffee shop is less likely to have
open registers with no customers. In this example, you can think of the registers as Amazon EC2
instances.
High-demand period - Throughout the day, as the number of customers increases, the coffee shop
opens more registers to accommodate them; the Auto Scaling group represents these additional
registers. Additionally, a coffee shop employee directs customers to the most appropriate register so
that requests are distributed evenly across the open registers. You can think of this coffee shop
employee as a load balancer.
The AWS Well-Architected Framework
The AWS Well-Architected Framework helps you understand how to design and operate reliable, secure,
efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure
your architecture against best practices and design principles and identify areas for improvement. The
Well-Architected Framework is based on five pillars:
• Operational excellence
• Security
• Reliability
• Performance efficiency
• Cost optimization
Advantages of cloud computing
Operating in the AWS Cloud offers many benefits over computing in on-premises or hybrid
environments:
• Trade upfront expense for variable expense.
• Benefit from massive economies of scale.
• Stop guessing capacity.
• Increase speed and agility.
• Stop spending money running and maintaining data centers.
• Go global in minutes.
Monolithic Applications
Applications are made of multiple components. The components communicate with each other to
transmit data, fulfill requests, and keep the application running. Suppose that you have an application
with tightly coupled components. These components might include databases, servers, the user
interface, business logic, and so on. This type of architecture can be considered a monolithic application.
In this approach to application architecture, if a single component fails, other components fail, and
possibly the entire application fails.
Microservices
In a microservices approach, application components are loosely coupled. In this case, if a single
component fails, the other components continue to work because they are not dependent on one
another. The loose coupling prevents the entire application from failing.
When designing applications on AWS, you can take a microservices approach with services and
components that fulfill different functions. Two services facilitate application integration: Amazon
Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).
Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS
topics, a publisher publishes messages to subscribers. This is similar to the coffee shop: the cashier
provides coffee orders to the barista, who makes the drinks.
In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several
other options.
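A minimal boto3 sketch of the publish/subscribe flow; the topic name, subscriber address, and message
are placeholders.

    import boto3

    # Hypothetical sketch: publish an order notification to an SNS topic.
    sns = boto3.client("sns")

    topic = sns.create_topic(Name="coffee-orders")      # create (or reuse) a topic
    sns.subscribe(
        TopicArn=topic["TopicArn"],
        Protocol="email",                               # subscribers could also be SQS, Lambda, HTTP, ...
        Endpoint="barista@example.com",                 # placeholder subscriber
    )
    sns.publish(
        TopicArn=topic["TopicArn"],
        Subject="New order",
        Message="1 medium latte",
    )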
Amazon Simple Queue Service (Amazon SQS) is a message queuing service. Using Amazon SQS, you can
send, store, and receive messages between software components, without losing messages or requiring
other services to be available. In Amazon SQS, an application sends messages into a queue. A user or
service retrieves a message from the queue, processes it, and then deletes it from the queue.
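A minimal boto3 sketch of the send, receive, and delete cycle described above; the queue name and
message body are placeholders.

    import boto3

    # Hypothetical sketch: send a message into a queue, then retrieve,
    # process, and delete it.
    sqs = boto3.client("sqs")

    queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

    sqs.send_message(QueueUrl=queue_url, MessageBody="1 medium latte")

    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=10,          # long polling
    )
    for message in response.get("Messages", []):
        print("Processing:", message["Body"])
        # Delete the message once it has been processed successfully.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])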
Containers
In AWS, you can also build and run containerized applications. Containers provide you with a standard
way to package your application's code and dependencies into a single object. You can also use
containers for processes and workflows in which there are essential requirements for security,
reliability, and scalability.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container
management system that enables you to run and scale containerized applications on AWS.
Amazon ECS supports Docker containers. Docker is a software platform that enables you to build, test,
and deploy applications quickly. AWS supports the use of open-source Docker Community Edition and
subscription-based Docker Enterprise Edition. With Amazon ECS, you can use API calls to launch and
stop Docker-enabled applications.
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run
Kubernetes on AWS.
Kubernetes is open-source software that enables you to deploy and manage containerized applications
at scale. A large community of volunteers maintains Kubernetes, and AWS actively works together with
the Kubernetes community. As new Kubernetes features and functionality are released, you can easily
apply these updates to your applications managed by Amazon EKS.
AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon
EKS. When using AWS Fargate, you do not need to provision or manage servers. AWS Fargate manages
your server infrastructure for you. You can focus more on innovating and developing your applications,
and you pay only for the resources that are required to run your containers.
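A rough sketch of launching a containerized task on Fargate through the Amazon ECS API with boto3;
the cluster name, task definition, and subnet ID are placeholders and assume those resources already
exist.

    import boto3

    # Hypothetical sketch: run an existing task definition on AWS Fargate.
    ecs = boto3.client("ecs")

    ecs.run_task(
        cluster="my-cluster",                      # placeholder ECS cluster
        taskDefinition="my-web-app:1",             # placeholder task definition
        launchType="FARGATE",                      # no EC2 instances to manage
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet ID
                "assignPublicIp": "ENABLED",
            }
        },
    )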
Global Infrastructure
When determining the right Region for your services, data, and applications, consider the following four
business factors:
• Compliance with data governance and legal requirements
• Proximity to your customers
• Available services within a Region
• Pricing
An Availability Zone is a single data center or a group of data centers within a Region. Availability Zones
are located tens of miles apart from each other. This is close enough to have low latency (the time
between when content is requested and when it is received) between Availability Zones. However, if a
disaster occurs in one part of the Region, the Availability Zones are distant enough from each other to
reduce the chance that more than one of them is affected.
Software development kits (SDKs)
Another option for accessing and managing AWS services, beyond the console and the CLI, is the
software development kits (SDKs). SDKs make it easier for you to use AWS services through an API
designed for your programming language or platform. SDKs enable you to use AWS services with your
existing applications or create entirely new applications that will run on AWS.
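For example, a minimal sketch with the AWS SDK for Python (boto3); it assumes AWS credentials are
already configured locally and simply lists the S3 buckets in the account.

    import boto3

    # Use the SDK to call an AWS service from your own code.
    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])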
AWS Elastic Beanstalk
With AWS Elastic Beanstalk, you provide code and configuration settings, and Elastic Beanstalk deploys
the resources necessary to perform the following tasks:
• Capacity adjustment
• Load balancing
• Automatic scaling
• Application health monitoring
AWS CloudFormation
With AWS CloudFormation, you can treat your infrastructure as code. This means that you can build an
environment by writing lines of code instead of using the AWS Management Console to individually
provision resources.
AWS CloudFormation provisions your resources in a safe, repeatable manner, enabling you to frequently
build your infrastructure and applications without having to perform manual actions or write custom
scripts. It determines the right operations to perform when managing your stack and rolls back changes
automatically if it detects errors.
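As a rough illustration of infrastructure as code, the sketch below defines a single S3 bucket in a small
CloudFormation template and creates a stack from it with boto3; the stack and bucket names are
placeholders, and the bucket name would need to be globally unique.

    import boto3

    # Hypothetical sketch: describe an S3 bucket as code, then provision it
    # as a CloudFormation stack.
    template = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      NotesBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: my-example-notes-bucket
    """

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="example-storage-stack",
        TemplateBody=template,
    )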
Amazon Route 53
Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end
users to internet applications hosted in AWS. Amazon Route 53 connects user requests to infrastructure
running in AWS (such as Amazon EC2 instances and load balancers), and it can also route users to
infrastructure outside of AWS.
Another feature of Route 53 is the ability to manage the DNS records for domain names. You can register
new domain names directly in Route 53, and you can transfer DNS records for existing domain names
managed by other domain registrars. This enables you to manage all of your domain names within a
single location. Route 53 also works together with Amazon CloudFront, the content delivery service
covered in a previous module, to deliver content to customers.
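A minimal sketch of managing a DNS record through the Route 53 API with boto3; the hosted zone ID,
domain name, and IP address are placeholder values.

    import boto3

    # Hypothetical sketch: create or update an A record in an existing hosted
    # zone so that "www.example.com" points to a server's IP address.
    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000EXAMPLE",         # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",            # create the record or update it in place
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                    },
                }
            ]
        },
    )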
Instance stores
Block-level storage volumes behave like physical hard drives. An instance store provides temporary
block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically
attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance.
When the instance is terminated, you lose any data in the instance store.
Amazon Elastic Block Store
Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage volumes that you
can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on
the attached EBS volume remains available.
To create an EBS volume, you define the configuration (such as volume size and type) and provision it.
After you create an EBS volume, you can attach it to an Amazon EC2 instance.
Because EBS volumes are for data that needs to persist, it’s important to back up the data. You can take
incremental backups of EBS volumes by creating Amazon EBS snapshots.
An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all
the data. For subsequent backups, only the blocks of data that have changed since the most recent
snapshot are saved.
Incremental backups are different from full backups, in which all the data in a storage volume is copied
each time a backup occurs. A full backup includes data that has not changed since the most recent
backup.
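A minimal boto3 sketch of creating an EBS snapshot; the volume ID is a placeholder. Because EBS
snapshots are incremental, each snapshot after the first saves only the blocks that changed since the
previous one.

    import boto3

    # Hypothetical sketch: take a snapshot of an existing EBS volume.
    ec2 = boto3.client("ec2")

    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",                 # placeholder volume ID
        Description="Nightly backup of the data volume",
    )
    print("Snapshot started:", snapshot["SnapshotId"])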
S3 Storage Classes
• S3 Standard
• S3 Standard-Infrequent Access (S3 Standard-IA)
• S3 One Zone-Infrequent Access (S3 One Zone-IA)
• S3 Intelligent-Tiering - Ideal for data with unknown or changing access patterns. Requires a small
monthly monitoring and automation fee per object. In the S3 Intelligent-Tiering storage class,
Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30
consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-
IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the
frequent access tier, S3 Standard.
• S3 Glacier
• S3 Glacier Deep Archive - Lowest-cost object storage class ideal for archiving. Able to retrieve
objects within 12 hours. When deciding between Amazon S3 Glacier and Amazon S3 Glacier
Deep Archive, consider how quickly you need to retrieve archived objects. You can retrieve
objects stored in the S3 Glacier storage class within a few minutes to a few hours. By
comparison, you can retrieve objects stored in the S3 Glacier Deep Archive storage class within
12 hours.
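A minimal boto3 sketch showing that the storage class is chosen per object at upload time; the bucket
name, key, and contents are placeholders.

    import boto3

    # Hypothetical sketch: upload an object directly into S3 Standard-IA.
    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="my-example-bucket",
        Key="reports/archive.csv",
        Body=b"...report contents...",
        StorageClass="STANDARD_IA",       # other values include INTELLIGENT_TIERING,
                                          # ONEZONE_IA, GLACIER, DEEP_ARCHIVE
    )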
File storage
In file storage, multiple clients (such as users, applications, servers, and so on) can access data that is
stored in shared file folders. In this approach, a storage server uses block storage with a local file system
to organize files. Clients access data through file paths.
Compared to block storage and object storage, file storage is ideal for use cases in which a large number
of services and resources need to access the same data at the same time.
Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-
premises resources. As you add and remove files, Amazon EFS grows and shrinks automatically. It can
scale on demand to petabytes without disrupting applications.
Amazon EFS is a regional service. It stores data in and across multiple Availability Zones.
The duplicate storage enables you to access data concurrently from all the Availability Zones in the
Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using
AWS Direct Connect.
An Amazon EBS volume stores data in a single Availability Zone.
To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume
must reside within the same Availability Zone.
Amazon Redshift
Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the
ability to collect data from many sources and helps you to understand relationships and trends across
your data.
AWS Database Migration Service (AWS DMS)
AWS Database Migration Service (AWS DMS) enables you to migrate relational databases, nonrelational
databases, and other types of data stores.
With AWS DMS, you move data between a source database and a target database. The source and
target databases can be of the same type or different types. During the migration, your source database
remains operational, reducing downtime for any applications that rely on the database.
For example, suppose that you have a MySQL database that is stored on premises in an Amazon EC2
instance or in Amazon RDS. Consider the MySQL database to be your source database. Using AWS DMS,
you could migrate your data to a target database, such as an Amazon Aurora database.
Other use cases for AWS DMS include:
• Development and test database migrations
• Database consolidation
• Continuous replication
Additional database services
• Amazon DocumentDB
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon Managed Blockchain
• Amazon ElastiCache
• Amazon DynamoDB Accelerator
AWS Organizations
AWS Organizations is a service that enables you to centrally manage multiple AWS accounts. You can
group accounts into organizational units (OUs) and use service control policies (SCPs) to place
restrictions on the AWS services, resources, and individual API actions that users and roles in each
account can access.
Compliance
AWS Artifact
Depending on your company’s industry, you may need to uphold specific standards. An audit or
inspection will ensure that the company has met those standards. AWS Artifact is a service that provides
on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact
consists of two main sections: AWS Artifact Agreements and AWS Artifact Reports.
AWS Artifact Agreements
Suppose that your company needs to sign an agreement with AWS regarding your use of certain types of
information throughout AWS services. You can do this through AWS Artifact Agreements.
In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account
and for all your accounts in AWS Organizations. Different types of agreements are offered to address the
needs of customers who are subject to specific regulations, such as the Health Insurance Portability and
Accountability Act (HIPAA).
AWS Artifact Reports
Next, suppose that a member of your company’s development team is building an application and needs
more information about their responsibility for complying with certain regulatory standards. You can
advise them to access this information in AWS Artifact Reports.
AWS Artifact Reports provide compliance reports from third-party auditors. These auditors have tested
and verified that AWS is compliant with a variety of global, regional, and industry-specific security
standards and regulations. AWS Artifact Reports is kept up to date as new reports are released. You
can provide the AWS audit artifacts to your auditors or regulators as evidence of AWS security controls.
Customer Compliance Center
The Customer Compliance Center contains resources to help you learn more about AWS compliance. In
the Customer Compliance Center, you can read customer compliance stories to discover how companies
in regulated industries have solved various compliance, governance, and audit challenges. You can also
access compliance whitepapers and documentation on topics such as:
• AWS answers to key compliance questions
• An overview of AWS risk and compliance
• An auditing security checklist
Additionally, the Customer Compliance Center includes an auditor learning path. This learning path is
designed for individuals in auditing, compliance, and legal roles who want to learn more about how their
internal operations can demonstrate compliance using the AWS Cloud.
CloudWatch and CloudTrail
CloudTrail Insights - Within CloudTrail, you can also enable CloudTrail Insights. This optional feature
allows CloudTrail to automatically detect unusual API activities in your AWS account.
For example, CloudTrail Insights might detect that a higher number of Amazon EC2 instances than usual
have recently launched in your account. You can then review the full event details to determine which
actions you need to take next.
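As a rough follow-up sketch, boto3 can query the CloudTrail event history for recent RunInstances calls
to see which API activity contributed to the spike; the attribute filter and result limit are illustrative
values.

    import boto3

    # Hypothetical sketch: list recent EC2 launch events recorded by CloudTrail.
    cloudtrail = boto3.client("cloudtrail")

    events = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
        ],
        MaxResults=10,
    )
    for event in events["Events"]:
        print(event["EventTime"], event.get("Username", "unknown"))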
Consolidated billing
In an earlier module, you learned about AWS Organizations, a service that enables you to manage
multiple AWS accounts from a central location. AWS Organizations also provides the option for
consolidated billing. The consolidated billing feature of AWS Organizations enables you to receive a
single bill for all AWS accounts in your organization. By consolidating, you can easily track the combined
costs of all the linked accounts in your organization. The default maximum number of accounts allowed
for an organization is 4, but you can contact AWS Support to increase your quota, if needed.
Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and
Reserved Instances across the accounts in your organization. For instance, one account might not have
enough monthly usage to qualify for discount pricing. However, when multiple accounts are combined,
their aggregated usage may result in a benefit that applies across all accounts in the organization.
AWS Cloud Adoption Framework (AWS CAF)
At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into six areas of
focus, called Perspectives. Each Perspective addresses distinct responsibilities. The planning process
helps the right people across the organization prepare for the changes ahead.
In general, the Business, People, and Governance Perspectives focus on business capabilities, whereas
the Platform, Security, and Operations Perspectives focus on technical capabilities.
Six strategies for migration
When migrating applications to the cloud, six of the most common migration strategies that you can
implement are:
• Rehosting - also known as “lift-and-shift,” involves moving applications without changes. In the
scenario of a large legacy migration, in which the company is looking to implement its migration
and scale quickly to meet a business case, the majority of applications are rehosted.
• Replatforming - also known as “lift, tinker, and shift,” involves making a few cloud optimizations
to realize a tangible benefit. Optimization is achieved without changing the core architecture of
the application.
• Refactoring (also known as re-architecting) - involves reimagining how an
application is architected and developed by using cloud-native features. Refactoring is driven by
a strong business need to add features, scale, or performance that would otherwise be difficult
to achieve in the application’s existing environment.
• Repurchasing - involves moving from a traditional license to a software-as-a-service model. For
example, a business might choose to implement the repurchasing strategy by migrating from a
customer relationship management (CRM) system to Salesforce.com.
• Retaining - consists of keeping applications that are critical for the business in the source
environment. This might include applications that require major refactoring before they can be
migrated, or work that can be postponed until a later time.
• Retiring - the process of removing applications that are no longer needed.
AWS Snow Family
• AWS Snowcone - the smallest member of the Snow Family; a portable, rugged device for small-scale
data migration and edge computing.
• AWS Snowball Edge - used for data migration and edge computing; the Storage Optimized device
provides 80 TB of usable storage and 40 vCPUs.
• AWS Snowmobile - moves up to 100 PB of data in a 45-foot-long ruggedized shipping container.
Serverless applications
With AWS, serverless refers to applications that don’t require you to provision, maintain, or administer
servers. You don’t need to worry about fault tolerance or availability. AWS handles these capabilities for
you.
AWS Lambda is an example of a service that you can use to run serverless applications. If you design
your architecture to trigger Lambda functions to run your code, you can bypass the need to manage a
fleet of servers. Building your architecture with serverless applications enables your developers to focus
on their core product instead of managing and operating servers.
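A minimal sketch of a Python Lambda function handler; the event fields are hypothetical and would
depend on the trigger you configure.

    import json


    def lambda_handler(event, context):
        # Lambda invokes this handler with the triggering event; there are no
        # servers for you to provision or manage.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }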
Artificial intelligence
AWS offers a variety of services powered by artificial intelligence (AI). For example, you can perform the
following tasks:
• Convert speech to text with Amazon Transcribe.
• Discover patterns in text with Amazon Comprehend.
• Identify potentially fraudulent online activities with Amazon Fraud Detector.
• Build voice and text chatbots with Amazon Lex.
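For example, a minimal boto3 sketch that sends a piece of text to Amazon Comprehend for sentiment
analysis; the sample text is a placeholder.

    import boto3

    # Hypothetical sketch: detect the sentiment of a piece of customer feedback.
    comprehend = boto3.client("comprehend")

    result = comprehend.detect_sentiment(
        Text="The coffee was great and the service was fast.",
        LanguageCode="en",
    )
    print(result["Sentiment"])          # e.g. POSITIVE, NEGATIVE, NEUTRAL, MIXED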
Machine learning
Traditional machine learning (ML) development is complex, expensive, time consuming, and error
prone. AWS offers Amazon SageMaker to remove the difficult work from the process and empower you
to build, train, and deploy ML models quickly. You can use ML to analyze data, solve complex problems,
and predict outcomes before they happen.