New SAA Practice Set 1

The document outlines a practice set for AWS security and architecture, detailing various scenarios and recommended solutions for managing AWS accounts, IAM roles, data storage, and application performance. Key recommendations include enabling Multi-Factor Authentication for AWS root users, using IAM roles for cross-account access, and implementing AWS Global Accelerator for improved application performance. The document also emphasizes the importance of AWS KMS key management and the use of AWS services like Kinesis Data Firehose for data ingestion with minimal infrastructure maintenance.
New SAA Practice Set 1 (Total points: 47/65)

The respondent's email ([email protected]) was recorded on submission of this form.

Name *

Gokul Upadhyay Guragain

Email *

[email protected]
1. (1/1) An IT consultant is helping the owner of a medium-sized business set
up an AWS account. What are the security recommendations he must
follow while creating the AWS account root user? (Select two)

Create a strong password for the AWS account root user

Encrypt the access keys and save them on Amazon S3

Enable Multi Factor Authentication (MFA) for the AWS account root user
account

Send an email to the business owner with details of the login username and
password for the AWS root user. This will help the business owner to troubleshoot
any login issues in future

Create AWS account root user access keys and share those keys only with the
business owner

Feedback

Create a strong password for the AWS account root user

Enable Multi Factor Authentication (MFA) for the AWS account root user account

Here are some of the best practices while creating an AWS account root user:

1) Use a strong password to help protect account-level access to the AWS Management Console.
2) Never share your AWS account root user password or access keys with anyone.
3) If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. You should not encrypt the access keys and save them on Amazon S3.
4) If you don't already have an access key for your AWS account root user, don't create one unless you absolutely need to.
5) Enable AWS multi-factor authentication (MFA) on your AWS account root user account.
2. (1/1) An organization wants to delegate access to a set of users from the
development environment so that they can access some resources in the
production environment which is managed under another AWS account.

As a solutions architect, which of the following steps would you recommend?

Create a new IAM role with the required permissions to access the resources in
the production environment. The users can then assume this IAM role while
accessing the resources from the production environment

It is not possible to access cross-account resources

Create new IAM user credentials for the production environment and share these
credentials with the set of users from the development environment

Both IAM roles and IAM users can be used interchangeably for cross-account
access

Feedback

Create a new IAM role with the required permissions to access the resources in the
production environment. The users can then assume this IAM role while accessing the
resources from the production environment

IAM roles allow you to delegate access to users or services that normally don't have
access to your organization's AWS resources. IAM users or AWS services can assume a
role to obtain temporary security credentials that can be used to make AWS API calls.
Consequently, you don't have to share long-term credentials for access to a resource.
Using IAM roles, it is possible to access cross-account resources.
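
To make this concrete, here is a minimal sketch of how a developer might assume such a role using boto3; the role ARN and session name are placeholder assumptions, not values from the question:

    import boto3

    sts = boto3.client("sts")

    # Assume the cross-account role defined in the production account (hypothetical ARN)
    response = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ProdResourceAccess",
        RoleSessionName="dev-user-session",
    )
    creds = response["Credentials"]

    # Use the temporary credentials to call AWS APIs against the production account
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

The temporary credentials expire automatically, so no long-term credentials ever leave the production account.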
3. (0/1) The engineering team at an in-home fitness company is evaluating
multiple in-memory data stores with the ability to power its on-demand,
live leaderboard. The company's leaderboard requires high availability,
low latency, and real-time processing to deliver customizable user data
for the community of users working out together virtually from the
comfort of their home.

As a solutions architect, which of the following solutions would you recommend? (Select two)

Power the on-demand, live leaderboard using Amazon DynamoDB as it meets the
in-memory, high availability, low latency requirements

Power the on-demand, live leaderboard using Amazon DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements

Power the on-demand, live leaderboard using Amazon ElastiCache for Redis as it
meets the in-memory, high availability, low latency requirements

Power the on-demand, live leaderboard using Amazon RDS for Aurora as it
meets the in-memory, high availability, low latency requirements

Power the on-demand, live leaderboard using Amazon Neptune as it meets the in-
memory, high availability, low latency requirements

Correct answer

Power the on-demand, live leaderboard using Amazon DynamoDB with DynamoDB
Accelerator (DAX) as it meets the in-memory, high availability, low latency
requirements

Power the on-demand, live leaderboard using Amazon ElastiCache for Redis as it
meets the in-memory, high availability, low latency requirements

Feedback

Power the on-demand, live leaderboard using Amazon Neptune as it meets the in-memory,
high availability, low latency requirements - Amazon Neptune is a fast, reliable, fully-
managed graph database service that makes it easy to build and run applications that
work with highly connected datasets. Neptune is not an in-memory database, so this
option is not correct.

Power the on-demand, live leaderboard using Amazon DynamoDB as it meets the in-
memory, high availability, low latency requirements - DynamoDB is not an in-memory
database, so this option is not correct.
Power the on-demand, live leaderboard using Amazon RDS for Aurora as it meets the in-
memory, high availability, low latency requirements - Amazon Aurora is a MySQL and
PostgreSQL-compatible relational database built for the cloud, that combines the
performance and availability of traditional enterprise databases with the simplicity and
cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-
tolerant, self-healing storage system that auto-scales up to 128TB per database instance.
Aurora is not an in-memory database, so this option is not correct.

References:

https://aws.amazon.com/elasticache/

https://aws.amazon.com/elasticache/redis/

https://aws.amazon.com/dynamodb/dax/
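
As background on why an in-memory store suits this use case, here is a minimal leaderboard sketch using the redis Python client; the endpoint host name and key names are placeholder assumptions for an ElastiCache for Redis cluster:

    import redis

    # Connect to a hypothetical ElastiCache for Redis endpoint
    r = redis.Redis(host="leaderboard.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

    # Record a user's score; a sorted set keeps members ordered by score in memory
    r.zadd("leaderboard", {"user:42": 1870})

    # Read the top 10 users with their scores, highest first, with very low latency
    top_10 = r.zrevrange("leaderboard", 0, 9, withscores=True)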
4. (1/1) A social photo-sharing company uses Amazon Simple Storage Service
(Amazon S3) to store the images uploaded by the users. These images
are kept encrypted in Amazon S3 by using AWS Key Management Service
(AWS KMS) and the company manages its own AWS KMS keys for
encryption. A member of the DevOps team accidentally deleted the AWS
KMS key a day ago, thereby rendering the user's photo data
unrecoverable. You have been contacted by the company to consult them
on possible solutions to this crisis.

As a solutions architect, which of the following steps would you recommend to solve this issue?

Contact AWS support to retrieve the AWS KMS key from their backup

The AWS KMS key can be recovered by the AWS root account user

The company should issue a notification on its web application informing the users
about the loss of their data

As the AWS KMS key was deleted a day ago, it must be in the 'pending deletion'
status and hence you can just cancel the KMS key deletion and recover the key

Feedback

As the AWS KMS key was deleted a day ago, it must be in the 'pending deletion' status and
hence you can just cancel the KMS key deletion and recover the key

AWS Key Management Service (KMS) makes it easy for you to create and manage
cryptographic keys and control their use across a wide range of AWS services and in your
applications. AWS KMS is a secure and resilient service that uses hardware security
modules that have been validated under FIPS 140-2.

Deleting an AWS KMS key in AWS Key Management Service (AWS KMS) is destructive and
potentially dangerous. Therefore, AWS KMS enforces a waiting period. To delete a KMS
key in AWS KMS you schedule key deletion. You can set the waiting period from a
minimum of 7 days up to a maximum of 30 days. The default waiting period is 30 days.
During the waiting period, the KMS key status and key state is Pending deletion. To recover
the KMS key, you can cancel key deletion before the waiting period ends. After the waiting
period ends you cannot cancel key deletion, and AWS KMS deletes the KMS key.
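
For illustration, a minimal sketch of scheduling and then cancelling a key deletion with boto3; the key ID is a placeholder assumption:

    import boto3

    kms = boto3.client("kms")
    key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder KMS key ID

    # Deletion is always scheduled with a waiting period (7 to 30 days, default 30)
    kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)

    # While the key state is 'PendingDeletion', the deletion can still be cancelled
    kms.cancel_key_deletion(KeyId=key_id)

    # The recovered key comes back disabled, so re-enable it before using it again
    kms.enable_key(KeyId=key_id)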
5. (1/1) A media company runs a photo-sharing web application that is
accessed across three different countries. The application is deployed on
several Amazon Elastic Compute Cloud (Amazon EC2) instances running
behind an Application Load Balancer. With new government regulations,
the company has been asked to block access from two countries and
allow access only from the home country of the company.

Which configuration should be used to meet this changed requirement?

Configure the security group for the Amazon EC2 instances

Use Geo Restriction feature of Amazon CloudFront in an Amazon Virtual Private Cloud (Amazon VPC)

Configure AWS Web Application Firewall (AWS WAF) on the Application Load Balancer in an Amazon Virtual Private Cloud (Amazon VPC)

Configure the security group on the Application Load Balancer

Feedback

Configure AWS Web Application Firewall (AWS WAF) on the Application Load Balancer in an Amazon Virtual Private Cloud (Amazon VPC)

AWS Web Application Firewall (AWS WAF) is a web application firewall service that lets
you monitor web requests and protect your web applications from malicious requests. Use
AWS WAF to block or allow requests based on conditions that you specify, such as the IP
addresses. You can also use AWS WAF preconfigured protections to block common
attacks like SQL injection or cross-site scripting.

You can use AWS WAF with your Application Load Balancer to allow or block requests
based on the rules in a web access control list (web ACL). Geographic (Geo) Match
Conditions in AWS WAF allows you to use AWS WAF to restrict application access based
on the geographic location of your viewers. With geo match conditions you can choose the
countries from which AWS WAF should allow access.
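
A rough sketch of what such a geo-match rule could look like with the AWS WAFv2 API via boto3; the ACL name, metric names, and country code are illustrative assumptions:

    import boto3

    wafv2 = boto3.client("wafv2")

    # Allow requests only from the home country (e.g. 'ES'); block everything else by default
    geo_allow_rule = {
        "Name": "allow-home-country",
        "Priority": 0,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["ES"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-home-country",
        },
    }

    wafv2.create_web_acl(
        Name="geo-restriction-acl",
        Scope="REGIONAL",  # REGIONAL scope is needed to associate the web ACL with an ALB
        DefaultAction={"Block": {}},
        Rules=[geo_allow_rule],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "geo-restriction-acl",
        },
    )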
6. (0/1) The engineering team at a Spanish professional football club has built
a notification system for its website using Amazon Simple Notification
Service (Amazon SNS) notifications which are then handled by an AWS
Lambda function for end-user delivery. During the off-season, the
notification systems need to handle about 100 requests per second.
During the peak football season, the rate touches about 5000 requests
per second and it is noticed that a significant number of the notifications
are not being delivered to the end-users on the website.

As a solutions architect, which of the following would you suggest as the BEST possible solution to this issue?

Amazon SNS message deliveries to AWS Lambda have crossed the account
concurrency quota for AWS Lambda, so the team needs to contact AWS support to
raise the account limit

The engineering team needs to provision more servers running the Amazon SNS
service

Amazon SNS has hit a scalability limit, so the team needs to contact AWS support
to raise the account limit

The engineering team needs to provision more servers running the AWS
Lambda service

Correct answer

Amazon SNS message deliveries to AWS Lambda have crossed the account
concurrency quota for AWS Lambda, so the team needs to contact AWS support to
raise the account limit

Feedback

Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise
the account limit - Amazon SNS leverages the proven AWS cloud to dynamically scale with
your application. You don't need to contact AWS support, as SNS is a fully managed
service, taking care of the heavy lifting related to capacity planning, provisioning,
monitoring, and patching. Therefore, this option is incorrect.

The engineering team needs to provision more servers running the Amazon SNS service

The engineering team needs to provision more servers running the AWS Lambda service

As both AWS Lambda and Amazon SNS are serverless and fully managed services, the
engineering team cannot provision more servers. Both of these options are incorrect.

References:
https://aws.amazon.com/sns/

https://aws.amazon.com/sns/faqs/

7. (1/1) A gaming company is looking at improving the availability and
performance of its global flagship application which utilizes User
Datagram Protocol and needs to support fast regional failover in case an
AWS Region goes down. The company wants to continue using its own
custom Domain Name System (DNS) service.

Which of the following AWS services represents the best solution for this
use-case?

AWS Elastic Load Balancing (ELB)

Amazon Route 53

AWS Global Accelerator

Amazon CloudFront

Feedback

AWS Global Accelerator

AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the
performance of your applications by lowering first-byte latency (the round trip time for a
packet to go from a client to your endpoint and back again) and jitter (the variation of
latency), and increasing throughput (the amount of time it takes to transfer data) as
compared to the public internet.

AWS Global Accelerator improves performance for a wide range of applications over TCP
or UDP by proxying packets at the edge to applications running in one or more AWS
Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP),
IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static
IP addresses or deterministic, fast regional failover.
8. (1/1) A geological research agency maintains the seismological data for the
last 100 years. The data has a velocity of 1GB per minute. You would like
to store the data with only the most relevant attributes to build a
predictive model for earthquakes.

What AWS services would you use to build the most cost-effective
solution with the LEAST amount of infrastructure maintenance?

Ingest the data in Amazon Kinesis Data Streams and use an intermediary AWS
Lambda function to filter and transform the incoming stream before the output is
dumped on Amazon S3

Ingest the data in Amazon Kinesis Data Analytics and use SQL queries to filter and
transform the data before writing to Amazon S3

Ingest the data in a Spark Streaming Cluster on Amazon EMR and use Spark
Streaming transformations before writing to Amazon S3

Ingest the data in Amazon Kinesis Data Firehose and use an intermediary AWS
Lambda function to filter and transform the incoming stream before the output
is dumped on Amazon S3

Feedback

Ingest the data in Amazon Kinesis Data Firehose and use an intermediary AWS Lambda
function to filter and transform the incoming stream before the output is dumped on
Amazon S3

Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores
and analytics tools. It can capture, transform, and load streaming data into Amazon S3,
Amazon Redshift, Amazon OpenSearch Service, and Splunk, enabling near real-time
analytics with existing business intelligence tools and dashboards you’re already using
today. It is a fully managed service that automatically scales to match the throughput of
your data and requires no ongoing administration. It can also batch, compress, and
encrypt the data before loading it, minimizing the amount of storage used at the
destination and increasing security.

The correct option is to ingest the data in Amazon Kinesis Data Firehose and use an AWS
Lambda function to filter and transform the incoming data before the output is dumped on
Amazon S3. This way you only need to store a sliced version of the data with only the
relevant data attributes required for your model. Also it should be noted that this solution
is entirely serverless and requires no infrastructure maintenance.
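
A minimal sketch of what such an intermediary transformation function might look like; the list of relevant attributes is a placeholder assumption, but the record format (recordId, result, base64-encoded data) is the contract Kinesis Data Firehose expects from a transformation Lambda:

    import base64
    import json

    RELEVANT_ATTRIBUTES = ["timestamp", "latitude", "longitude", "magnitude"]  # assumed attributes

    def lambda_handler(event, context):
        output = []
        for record in event["records"]:
            payload = json.loads(base64.b64decode(record["data"]))
            # Keep only the attributes needed for the predictive model
            slimmed = {k: payload[k] for k in RELEVANT_ATTRIBUTES if k in payload}
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode((json.dumps(slimmed) + "\n").encode()).decode(),
            })
        return {"records": output}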
9. (1/1) Amazon CloudFront offers a multi-tier cache in the form of regional
edge caches that improve latency. However, there are certain content
types that bypass the regional edge cache, and go directly to the origin.

Which of the following content types skip the regional edge cache?
(Select two)

Dynamic content, as determined at request time (cache-behavior configured to forward all headers)

E-commerce assets such as product photos

Static content such as style sheets, JavaScript files

User-generated videos

Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin

Feedback

Dynamic content, as determined at request time (cache-behavior configured to forward all headers)

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds, all within a developer-friendly environment.

CloudFront points of presence (POPs) (edge locations) make sure that popular content
can be served quickly to your viewers. CloudFront also has regional edge caches that bring
more of your content closer to your viewers, even when the content is not popular enough
to stay at a POP, to help improve performance for that content.

Dynamic content, as determined at request time (cache-behavior configured to forward all headers), does not flow through regional edge caches, but goes directly to the origin. So this option is correct.

Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin

Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin from the POPs and do not proxy through the regional edge caches. So this option is also correct.
10. (1/1) The development team at an e-commerce startup has set up multiple
microservices running on Amazon EC2 instances under an Application
Load Balancer. The team wants to route traffic to multiple back-end
services based on the URL path of the HTTP header. So it wants requests
for https://www.example.com/orders to go to a specific microservice and
requests for https://www.example.com/products to go to another
microservice.

Which of the following features of Application Load Balancers can be used for this use-case?

Query string parameter-based routing

Host-based Routing

HTTP header-based routing

Path-based Routing

Feedback

Path-based Routing

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and AWS Lambda functions.

If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request. Here are the different types -

Host-based Routing:

You can route a client request based on the Host field of the HTTP header allowing you to
route to multiple domains from the same load balancer.

Path-based Routing:

You can route a client request based on the URL path of the HTTP header.

HTTP header-based routing:

You can route a client request based on the value of any standard or custom HTTP header.

HTTP method-based routing:

You can route a client request based on any standard or custom HTTP method.
Query string parameter-based routing:

You can route a client request based on the query string or query parameters.

Source IP address CIDR-based routing:

You can route a client request based on source IP address CIDR from where the request
originates.

Path-based Routing Overview:

You can use path conditions to define rules that route requests based on the URL in the
request (also known as path-based routing).

The path pattern is applied only to the path of the URL, not to its query parameters.
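
For illustration, a minimal sketch of adding a path-based listener rule with boto3; the listener and target group ARNs are truncated placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Forward requests matching /orders* to the orders microservice's target group
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/...",  # placeholder
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/orders/..."}],  # placeholder
    )

A second rule with the path pattern '/products*' would forward to the products target group.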
11. (1/1) The engineering team at a data analytics company has observed that
its flagship application functions at its peak performance when the
underlying Amazon Elastic Compute Cloud (Amazon EC2) instances have
a CPU utilization of about 50%. The application is built on a fleet of
Amazon EC2 instances managed under an Auto Scaling group. The
workflow requests are handled by an internal Application Load Balancer
that routes the requests to the instances.

As a solutions architect, what would you recommend so that the application runs near its peak performance state?

Configure the Auto Scaling group to use an Amazon CloudWatch alarm triggered on
a CPU utilization threshold of 50%

Configure the Auto Scaling group to use simple scaling policy and set the CPU
utilization as the target metric with a target value of 50%

Configure the Auto Scaling group to use target tracking policy and set the CPU
utilization as the target metric with a target value of 50%

Configure the Auto Scaling group to use step scaling policy and set the CPU
utilization as the target metric with a target value of 50%

Feedback

Configure the Auto Scaling group to use target tracking policy and set the CPU utilization
as the target metric with a target value of 50%

An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as
a logical grouping for the purposes of automatic scaling and management. An Auto
Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health
check replacements and scaling policies.

With target tracking scaling policies, you select a scaling metric and set a target value.
Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the
scaling policy and calculates the scaling adjustment based on the metric and the target
value. The scaling policy adds or removes capacity as required to keep the metric at, or
close to, the specified target value.

For example, you can use target tracking scaling to:

Configure a target tracking scaling policy to keep the average aggregate CPU utilization of
your Auto Scaling group at 50 percent. This meets the requirements specified in the given
use-case and therefore, this is the correct option.
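
A minimal sketch of such a target tracking policy with boto3; the Auto Scaling group and policy names are placeholder assumptions:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep the group's average CPU utilization at around 50%
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="flagship-app-asg",  # placeholder group name
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        },
    )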
12. (1/1) An e-commerce company is looking for a solution with high
availability, as it plans to migrate its flagship application to a fleet of
Amazon Elastic Compute Cloud (Amazon EC2) instances. The solution
should allow for content-based routing as part of the architecture.

As a Solutions Architect, which of the following will you suggest for the
company?

Use a Network Load Balancer for distributing traffic to the Amazon EC2 instances
spread across different Availability Zones (AZs). Configure a Private IP address to
mask any failure of an instance

Use an Application Load Balancer for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure Auto
Scaling group to mask any failure of an instance

Use an Auto Scaling group for distributing traffic to the Amazon EC2 instances
spread across different Availability Zones (AZs). Configure a Public IP address to
mask any failure of an instance

Use an Auto Scaling group for distributing traffic to the Amazon EC2 instances
spread across different Availability Zones (AZs). Configure an elastic IP address
(EIP) to mask any failure of an instance

Feedback

Use an Application Load Balancer for distributing traffic to the Amazon EC2 instances
spread across different Availability Zones (AZs). Configure Auto Scaling group to mask
any failure of an instance

The Application Load Balancer (ALB) is best suited for load balancing HTTP and HTTPS
traffic and provides advanced request routing targeted at the delivery of modern
application architectures, including microservices and containers. Operating at the
individual request level (Layer 7), the Application Load Balancer routes traffic to targets
within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

This is the correct option since the question has a specific requirement for content-based
routing which can be configured via the Application Load Balancer. Different Availability
Zones (AZs) provide high availability to the overall architecture and Auto Scaling group will
help mask any instance failures.
13. (0/1) A company uses Amazon S3 buckets for storing sensitive customer
data. The company has defined different retention periods for different
objects present in the Amazon S3 buckets, based on the compliance
requirements. But, the retention rules do not seem to work as expected.

Which of the following options represent a valid configuration for setting up retention periods for objects in Amazon S3 buckets? (Select two)

Different versions of a single object can have different retention modes and periods

When you apply a retention period to an object version explicitly, you specify a
Retain Until Date for the object version

The bucket default settings will override any explicit retention mode or period you
request on an object version

You cannot place a retention period on an object version through a bucket default
setting

When you use bucket default settings, you specify a Retain Until Date for the
object version

Correct answer

Different versions of a single object can have different retention modes and periods

When you apply a retention period to an object version explicitly, you specify a
Retain Until Date for the object version

Feedback

You cannot place a retention period on an object version through a bucket default setting -
You can place a retention period on an object version either explicitly or through a bucket
default setting.

When you use bucket default settings, you specify a Retain Until Date for the object
version - When you use bucket default settings, you don't specify a Retain Until Date.
Instead, you specify a duration, in either days or years, for which every object version
placed in the bucket should be protected.

The bucket default settings will override any explicit retention mode or period you request
on an object version - If your request to place an object version in a bucket contains an
explicit retention mode and period, those settings override any bucket default settings for
that object version.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
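
For illustration, a minimal sketch of both ways of applying retention with boto3; the bucket, key, version ID, modes, and dates are placeholder assumptions:

    import boto3
    from datetime import datetime, timezone

    s3 = boto3.client("s3")

    # Explicit retention on an object version: a Retain Until Date is specified
    s3.put_object_retention(
        Bucket="customer-data-bucket",  # placeholder
        Key="records/customer-1.json",
        VersionId="example-version-id",
        Retention={"Mode": "COMPLIANCE", "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc)},
    )

    # Bucket default retention: a duration in days (or years) is specified, not a date
    s3.put_object_lock_configuration(
        Bucket="customer-data-bucket",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 365}},
        },
    )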
14. (0/1) A financial services company uses Amazon GuardDuty for analyzing
its AWS account metadata to meet the compliance guidelines. However,
the company has now decided to stop using Amazon GuardDuty service.
All the existing findings have to be deleted and cannot persist anywhere
on AWS Cloud.

Which of the following techniques will help the company meet this
requirement?

Raise a service request with Amazon to completely delete the data from all their
backups

Disable the service in the general settings

De-register the service under services tab

Suspend the service in the general settings

Correct answer

Disable the service in the general settings

Feedback

Suspend the service in the general settings - You can stop Amazon GuardDuty from
analyzing your data sources at any time by choosing to suspend the service in the general
settings. This will immediately stop the service from analyzing data, but does not delete
your existing findings or configurations.

De-register the service under services tab - This is a made-up option, used only as a
distractor.

Raise a service request with Amazon to completely delete the data from all their backups -
There is no need to create a service request as you can delete the existing findings by
disabling the service.

Reference:

https://aws.amazon.com/guardduty/faqs/
15. (1/1) A healthcare company uses its on-premises infrastructure to run
legacy applications that require specialized customizations to the
underlying Oracle database as well as its host operating system (OS).
The company also wants to improve the availability of the Oracle
database layer. The company has hired you as an AWS Certified Solutions
Architect – Associate to build a solution on AWS that meets these
requirements while minimizing the underlying infrastructure maintenance
effort.

Which of the following options represents the best solution for this use
case?

Leverage cross AZ read-replica configuration of Amazon RDS for Oracle that allows
the Database Administrator (DBA) to access and customize the database
environment and the underlying operating system

Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows
the Database Administrator (DBA) to access and customize the database
environment and the underlying operating system

Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the
Database Administrator (DBA) to access and customize the database environment
and the underlying operating system

Deploy the Oracle database layer on multiple Amazon EC2 instances spread across
two Availability Zones (AZs). This deployment configuration guarantees high
availability and also allows the Database Administrator (DBA) to access and
customize the database environment and the underlying operating system

Feedback

Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the
Database Administrator (DBA) to access and customize the database environment and
the underlying operating system

Amazon RDS is a managed service that makes it easy to set up, operate, and scale a
relational database in the cloud. It provides cost-efficient and resizable capacity while
managing time-consuming database administration tasks. Amazon RDS can automatically
back up your database and keep your database software up to date with the latest version.
However, RDS does not allow you to access the host OS of the database.

For the given use-case, you need to use Amazon RDS Custom for Oracle as it allows you to
access and customize your database server host and operating system, for example by
applying special patches and changing the database software settings to support third-
party applications that require privileged access. Amazon RDS Custom for Oracle
facilitates these functionalities with minimum infrastructure maintenance effort. You need
to set up the RDS Custom for Oracle in multi-AZ configuration for high availability.
16. (0/1) An IT company wants to review its security best-practices after an
incident was reported where a new developer on the team was assigned
full access to Amazon DynamoDB. The developer accidentally deleted a
couple of tables from the production environment while building out a
new feature.

Which is the MOST effective way to address this issue so that such
incidents do not recur?

Use permissions boundary to control the maximum permissions employees can grant to the IAM principals

Remove full database access for all IAM users in the organization

Only root user should have full database access in the organization

The CTO should review the permissions for each new developer's IAM user so
that such incidents don't recur

Correct answer

Use permissions boundary to control the maximum permissions employees can grant to the IAM principals

Feedback

Remove full database access for all IAM users in the organization - It is not practical to
remove full access for all IAM users in the organization because a select set of users need
this access for database administration. So this option is not correct.

The CTO should review the permissions for each new developer's IAM user so that such
incidents don't recur - Likewise the CTO is not expected to review the permissions for each
new developer's IAM user, as this is best done via an automated procedure. This option
has been added as a distractor.

Only root user should have full database access in the organization - As a best practice,
the root user should not access the AWS account to carry out any administrative
procedures. So this option is not correct.

Reference:

https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/
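
For illustration, a minimal sketch (assuming boto3 and a hypothetical boundary policy ARN) of attaching a permissions boundary when creating an IAM user:

    import boto3

    iam = boto3.client("iam")

    # The boundary policy caps the maximum permissions this user can ever have
    iam.create_user(
        UserName="new-developer",  # placeholder
        PermissionsBoundary="arn:aws:iam::111122223333:policy/DeveloperBoundary",  # hypothetical policy ARN
    )

    # Even if a broad policy (e.g. full DynamoDB access) is attached to the user later,
    # actions such as dynamodb:DeleteTable stay denied unless the boundary also allows them.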
17. (1/1) An ivy-league university is assisting NASA to find potential landing
sites for exploration vehicles of unmanned missions to our neighboring
planets. The university uses High Performance Computing (HPC) driven
application architecture to identify these landing sites.

Which of the following Amazon EC2 instance topologies should this application be deployed on?

The Amazon EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput

The Amazon EC2 instances should be deployed in a spread placement group so that there are no correlated failures

The Amazon EC2 instances should be deployed in a partition placement group so that distributed workloads can be handled effectively

The Amazon EC2 instances should be deployed in an Auto Scaling group so that
application meets high availability requirements

Feedback

The Amazon EC2 instances should be deployed in a cluster placement group so that the
underlying workload can benefit from low network latency and high network throughput

The key thing to understand in this question is that HPC workloads need to achieve low-
latency network performance necessary for tightly-coupled node-to-node communication
that is typical of HPC applications. Cluster placement groups pack instances close
together inside an Availability Zone. These are recommended for applications that benefit
from low network latency, high network throughput, or both. Therefore this option is the
correct answer.
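
A minimal sketch of launching such a fleet into a cluster placement group with boto3; the group name, AMI ID, instance type, and count are placeholder assumptions:

    import boto3

    ec2 = boto3.client("ec2")

    # Cluster placement groups pack instances close together inside one Availability Zone
    ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

    # Launch the HPC nodes into the placement group
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="c5n.18xlarge",      # placeholder instance type
        MinCount=4,
        MaxCount=4,
        Placement={"GroupName": "hpc-cluster-pg"},
    )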
18. (1/1) A new DevOps engineer has joined a large financial services company
recently. As part of his onboarding, the IT department is conducting a
review of the checklist for tasks related to AWS Identity and Access
Management (AWS IAM).

As an AWS Certified Solutions Architect – Associate, which best practices would you recommend? (Select two)

Grant maximum privileges to avoid assigning privileges again

Configure AWS CloudTrail to log all AWS Identity and Access Management
(AWS IAM) actions

Use user credentials to provide access specific permissions for Amazon EC2
instances

Create a minimum number of accounts and share these account credentials among employees

Enable AWS Multi-Factor Authentication (AWS MFA) for privileged users

Feedback

Enable AWS Multi-Factor Authentication (AWS MFA) for privileged users

As per the AWS best practices, it is better to enable Multi Factor Authentication (MFA) for
privileged users via an MFA-enabled mobile device or hardware MFA token.

Configure AWS CloudTrail to log all AWS Identity and Access Management (AWS IAM)
actions

AWS recommends to turn on AWS CloudTrail to log all IAM actions for monitoring and
audit purposes.
19. (1/1) The solo founder at a tech startup has just created a brand new AWS
account. The founder has provisioned an Amazon EC2 instance 1A which
is running in AWS Region A. Later, he takes a snapshot of the instance 1A
and then creates a new Amazon Machine Image (AMI) in Region A from
this snapshot. This AMI is then copied into another Region B. The founder
provisions an instance 1B in Region B using this new AMI in Region B.

At this point in time, what entities exist in Region B?

1 Amazon EC2 instance and 1 AMI exist in Region B

1 Amazon EC2 instance and 1 snapshot exist in Region B

1 Amazon EC2 instance and 2 AMIs exist in Region B

1 Amazon EC2 instance, 1 AMI and 1 snapshot exist in Region B

Feedback

1 Amazon EC2 instance, 1 AMI and 1 snapshot exist in Region B

An Amazon Machine Image (AMI) provides the information required to launch an instance.
You must specify an AMI when you launch an instance. When the new AMI is copied from
Region A into Region B, it automatically creates a snapshot in Region B because AMIs are
based on the underlying snapshots. Further, an instance is created from this AMI in Region
B. Hence, we have 1 Amazon EC2 instance, 1 AMI and 1 snapshot in Region B.
20. (0/1) A Big Data analytics company wants to set up an AWS cloud
architecture that throttles requests in case of sudden traffic spikes. The
company is looking for AWS services that can be used for buffering or
throttling to handle such traffic variations.

Which of the following services can be used to support this requirement?

Elastic Load Balancer, Amazon Simple Queue Service (Amazon SQS), AWS Lambda

Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and Amazon
Kinesis

Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS) and AWS Lambda

Amazon Gateway Endpoints, Amazon Simple Queue Service (Amazon SQS) and
Amazon Kinesis

Correct answer

Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and Amazon
Kinesis

Feedback

Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service
(Amazon SNS) and AWS Lambda - Amazon SQS has the ability to buffer its messages.
Amazon Simple Notification Service (SNS) cannot buffer messages and is generally used
with SQS to provide the buffering facility. When requests come in faster than your Lambda
function can scale, or when your function is at maximum concurrency, additional requests
fail as the Lambda throttles those requests with error code 429 status code. So, this
combination of services is incorrect.

Amazon Gateway Endpoints, Amazon Simple Queue Service (Amazon SQS) and Amazon
Kinesis - A Gateway Endpoint is a gateway that you specify as a target for a route in your
route table for traffic destined to a supported AWS service. This cannot help in throttling or
buffering of requests. Amazon SQS and Kinesis can buffer incoming data. Since Gateway
Endpoint is an incorrect service for throttling or buffering, this option is incorrect.

Elastic Load Balancer, Amazon Simple Queue Service (Amazon SQS), AWS Lambda -
Elastic Load Balancer cannot throttle requests. Amazon SQS can be used to buffer
messages. When requests come in faster than your Lambda function can scale, or when
your function is at maximum concurrency, additional requests fail as the Lambda throttles
those requests with error code 429 status code. So, this combination of services is
incorrect.

References:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
https://aws.amazon.com/sqs/features/

21. (1/1) A logistics company is building a multi-tier application to track the
location of its trucks during peak operating hours. The company wants
these data points to be accessible in real-time in its analytics platform via
a REST API. The company has hired you as an AWS Certified Solutions
Architect Associate to build a multi-tier solution to store and retrieve this
location data for analysis.

Which of the following options addresses the given use case?

Leverage Amazon QuickSight with Amazon Redshift

Leverage Amazon API Gateway with Amazon Kinesis Data Analytics

Leverage Amazon API Gateway with AWS Lambda

Leverage Amazon Athena with Amazon S3

Feedback

Leverage Amazon API Gateway with Amazon Kinesis Data Analytics

You can use Kinesis Data Analytics to transform and analyze streaming data in real-time
with Apache Flink. Kinesis Data Analytics enables you to quickly build end-to-end stream
processing applications for log analytics, clickstream analytics, Internet of Things (IoT), ad
tech, gaming, etc. The four most common use cases are streaming extract-transform-load
(ETL), continuous metric generation, responsive real-time analytics, and interactive
querying of data streams. Kinesis Data Analytics for Apache Flink applications provides
your application 50 GB of running application storage per Kinesis Processing Unit (KPU).

Amazon API Gateway is a fully managed service that allows you to publish, maintain,
monitor, and secure APIs at any scale. Amazon API Gateway offers two options to create
RESTful APIs, HTTP APIs and REST APIs, as well as an option to create WebSocket APIs.

For the given use case, you can use Amazon API Gateway to create a REST API that
handles incoming requests having location data from the trucks and sends it to the
Kinesis Data Analytics application on the back end.
22. (0/1) A technology blogger wants to write a review on the comparative
pricing for various storage types available on AWS Cloud. The blogger
has created a test file of size 1 gigabyte with some random data. Next
he copies this test file into AWS S3 Standard storage class, provisions an
Amazon EBS volume (General Purpose SSD (gp2)) with 100 gigabytes of
provisioned storage and copies the test file into the Amazon EBS volume,
and lastly copies the test file into an Amazon EFS Standard Storage
filesystem. At the end of the month, he analyses the bill for costs incurred
on the respective storage types for the test file.

What is the correct order of the storage charges incurred for the test file
on these three storage types?

Cost of test file storage on Amazon EFS < Cost of test file storage on Amazon S3
Standard < Cost of test file storage on Amazon EBS

Cost of test file storage on Amazon S3 Standard < Cost of test file storage on
Amazon EFS < Cost of test file storage on Amazon EBS

Cost of test file storage on Amazon S3 Standard < Cost of test file storage on
Amazon EBS < Cost of test file storage on Amazon EFS

Cost of test file storage on Amazon EBS < Cost of test file storage on Amazon S3
Standard < Cost of test file storage on Amazon EFS

Correct answer

Cost of test file storage on Amazon S3 Standard < Cost of test file storage on
Amazon EFS < Cost of test file storage on Amazon EBS

Feedback

Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EBS
< Cost of test file storage on Amazon EFS

Cost of test file storage on Amazon EFS < Cost of test file storage on Amazon S3 Standard
< Cost of test file storage on Amazon EBS

Cost of test file storage on Amazon EBS < Cost of test file storage on Amazon S3 Standard
< Cost of test file storage on Amazon EFS

Following the computations shown earlier in the explanation, these three options are
incorrect.

References:

https://aws.amazon.com/ebs/pricing/
https://aws.amazon.com/s3/pricing/

https://aws.amazon.com/efs/pricing/
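
Since the underlying computations are not reproduced above, here is an illustrative calculation using assumed, approximate US East list prices (roughly $0.023 per GB-month for S3 Standard, $0.30 per GB-month for EFS Standard, and $0.10 per provisioned GB-month for EBS gp2; check the pricing pages above for current figures):

Amazon S3 Standard: billed for the 1 GB actually stored, about 1 x $0.023 = ~$0.02 per month
Amazon EFS Standard: billed for the 1 GB actually stored, about 1 x $0.30 = ~$0.30 per month
Amazon EBS gp2: billed for the full 100 GB provisioned, about 100 x $0.10 = ~$10 per month

The key point is that Amazon EBS charges for provisioned capacity regardless of how much of it is used, while Amazon S3 and Amazon EFS charge only for the data actually stored, which yields S3 Standard < EFS < EBS for this 1 GB test file.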

23. (1/1) A retail company uses Amazon Elastic Compute Cloud (Amazon EC2)
instances, Amazon API Gateway, Amazon RDS, Elastic Load Balancer and
Amazon CloudFront services. To improve the security of these services,
the Risk Advisory group has suggested a feasibility check for using the
Amazon GuardDuty service.

Which of the following would you identify as data sources supported by Amazon GuardDuty?

Elastic Load Balancing logs, Domain Name System (DNS) logs, AWS CloudTrail
events

VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events

Amazon CloudFront logs, Amazon API Gateway logs, AWS CloudTrail events

VPC Flow Logs, Amazon API Gateway logs, Amazon S3 access logs

Feedback

VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events

Amazon GuardDuty is a threat detection service that continuously monitors for malicious
activity and unauthorized behavior to protect your AWS accounts, workloads, and data
stored in Amazon S3. With the cloud, the collection and aggregation of account and
network activities is simplified, but it can be time-consuming for security teams to
continuously analyze event log data for potential threats. With GuardDuty, you now have an
intelligent and cost-effective option for continuous threat detection in AWS. The service
uses machine learning, anomaly detection, and integrated threat intelligence to identify
and prioritize potential threats.

Amazon GuardDuty analyzes tens of billions of events across multiple AWS data sources,
such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.

With a few clicks in the AWS Management Console, GuardDuty can be enabled with no
software or hardware to deploy or maintain. By integrating with Amazon EventBridge
Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and
straightforward to push into existing event management and workflow systems.
24. (0/1) A gaming company uses Amazon Aurora as its primary database
service. The company has now deployed 5 multi-AZ read replicas to
increase the read throughput and for use as failover target. The replicas
have been assigned the following failover priority tiers and corresponding
instance sizes are given in parentheses: tier-1 (16 terabytes), tier-1 (32
terabytes), tier-10 (16 terabytes), tier-15 (16 terabytes), tier-15 (32
terabytes).

In the event of a failover, Amazon Aurora will promote which of the following read replicas?

Tier-10 (16 terabytes)

Tier-1 (16 terabytes)

Tier-1 (32 terabytes)

Tier-15 (32 terabytes)

Correct answer

Tier-1 (32 terabytes)

Feedback

Tier-15 (32 terabytes)

Tier-1 (16 terabytes)

Tier-10 (16 terabytes)

Amazon Aurora promotes the replica with the highest failover priority (the lowest-numbered tier); if two or more replicas share the same priority, Aurora promotes the one that is largest in size. The tier-1 (32 terabytes) replica therefore wins the tie against tier-1 (16 terabytes), and the tier-10 and tier-15 replicas have lower priority, so these three options are incorrect.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html
25. (0/1) An IT security consultancy is working on a solution to protect data
stored in Amazon S3 from any malicious activity as well as check for any
vulnerabilities on Amazon EC2 instances.

As a solutions architect, which of the following solutions would you suggest to help address the given requirement?

Use Amazon Inspector to monitor any malicious activity on data stored in Amazon
S3. Use security assessments provided by Amazon GuardDuty to check for
vulnerabilities on Amazon EC2 instances

Use Amazon Inspector to monitor any malicious activity on data stored in Amazon S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on Amazon EC2 instances

Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon
S3. Use security assessments provided by Amazon GuardDuty to check for
vulnerabilities on Amazon EC2 instances

Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon
S3. Use security assessments provided by Amazon Inspector to check for
vulnerabilities on Amazon EC2 instances

Correct answer

Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon
S3. Use security assessments provided by Amazon Inspector to check for
vulnerabilities on Amazon EC2 instances

Feedback

Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon S3.
Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on
Amazon EC2 instances

Use Amazon Inspector to monitor any malicious activity on data stored in Amazon S3. Use
security assessments provided by Amazon Inspector to check for vulnerabilities on
Amazon EC2 instances

Use Amazon Inspector to monitor any malicious activity on data stored in Amazon S3. Use
security assessments provided by Amazon GuardDuty to check for vulnerabilities on
Amazon EC2 instances

These three options mix up the roles of the two services: Amazon GuardDuty is the threat detection service that monitors for malicious activity and unauthorized behavior, including on data stored in Amazon S3, while Amazon Inspector provides automated security assessments that check Amazon EC2 instances for software vulnerabilities and unintended network exposure. So these options are incorrect.

References:

https://aws.amazon.com/guardduty/
https://aws.amazon.com/inspector/
26. (0/1) A media agency stores its re-creatable assets on Amazon Simple
Storage Service (Amazon S3) buckets. The assets are accessed by a
large number of users for the first few days and the frequency of access
falls drastically after a week. Although the assets would be accessed only occasionally after the first week, they must continue to be immediately accessible when required. The cost of maintaining all the
assets on Amazon S3 storage is turning out to be very expensive and the
agency is looking at reducing costs as much as possible.

As an AWS Certified Solutions Architect – Associate, can you suggest a way to lower the storage costs while fulfilling the business requirements?

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days

Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days

Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days

Correct answer

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days

Feedback

Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days

As mentioned earlier, the minimum storage duration is 30 days before you can transition
objects from Amazon S3 Standard to Amazon S3 One Zone-IA or Amazon S3 Standard-IA,
so both these options are added as distractors.

Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days - Amazon S3 Standard-IA is for data that is
accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers
the high durability, high throughput, and low latency of Amazon S3 Standard, with a low per
GB storage price and per GB retrieval fee. This combination of low cost and high
performance makes Amazon S3 Standard-IA ideal for long-term storage, backups, and as
a data store for disaster recovery files. But, it costs more than Amazon S3 One Zone-IA
because of the redundant storage across Availability Zones (AZs). As the data is re-creatable, you don't need to incur this additional cost.

References:

https://aws.amazon.com/s3/storage-classes/

https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
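
For illustration, a minimal sketch of such a lifecycle rule applied with boto3; the bucket name and rule ID are placeholder assumptions:

    import boto3

    s3 = boto3.client("s3")

    # Transition the re-creatable assets to S3 One Zone-IA 30 days after creation
    s3.put_bucket_lifecycle_configuration(
        Bucket="media-assets-bucket",  # placeholder bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-one-zone-ia-after-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects in the bucket
                    "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
                }
            ]
        },
    )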
27. (0/1) A research group runs its flagship application on a fleet of Amazon
EC2 instances for a specialized task that must deliver high random I/O
performance. Each instance in the fleet would have access to a dataset
that is replicated across the instances by the application itself. Because
of the resilient application architecture, the specialized task would
continue to be processed even if any instance goes down, as the
underlying application would ensure the replacement instance has
access to the required dataset.
Which of the following options is the MOST cost-optimal and resource-
efficient solution to build this fleet of Amazon EC2 instances?

Use Amazon EC2 instances with access to Amazon S3 based storage

Use Amazon EC2 instances with Amazon EFS mount points

Use Amazon Elastic Block Store (Amazon EBS) based EC2 instances

Use Instance Store based Amazon EC2 instances

Correct answer

Use Instance Store based Amazon EC2 instances

Feedback

Use Amazon Elastic Block Store (Amazon EBS) based EC2 instances - Amazon Elastic
Block Store (Amazon EBS) based volumes would need to use provisioned IOPS (io1) as
the storage type and that would incur additional costs. As we are looking for the most
cost-optimal solution, this option is ruled out.

Use Amazon EC2 instances with Amazon EFS mount points - Using Amazon Elastic File
System (Amazon EFS) implies that extra resources would have to be provisioned
(compared to using instance store where the storage is located on disks that are
physically attached to the host instance itself). As we are looking for the most resource-
efficient solution, this option is also ruled out.

Use Amazon EC2 instances with access to Amazon S3 based storage - Using Amazon EC2
instances with access to Amazon S3 based storage does not deliver high random I/O
performance, this option is just added as a distractor.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
28. (1/1) A news network uses Amazon Simple Storage Service (Amazon S3)
to aggregate the raw video footage from its reporting teams across the
US. The news network has recently expanded into new geographies in
Europe and Asia. The technical teams at the overseas branch offices
have reported huge delays in uploading large video files to the destination
Amazon S3 bucket.

Which of the following are the MOST cost-effective options to improve the file upload speed into Amazon S3 (Select two)

Use multipart uploads for faster file uploads into the destination Amazon S3
bucket

Create multiple AWS Site-to-Site VPN connections between the AWS Cloud and
branch offices in Europe and Asia. Use these VPN connections for faster file
uploads into Amazon S3

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file uploads into the destination S3 bucket

Use AWS Global Accelerator for faster file uploads into the destination Amazon S3
bucket

Create multiple AWS Direct Connect connections between the AWS Cloud and
branch offices in Europe and Asia. Use the direct connect connections for faster file
uploads into Amazon S3

Feedback

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file uploads into
the destination S3 bucket

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over
long distances between your client and an S3 bucket. Amazon S3TA takes advantage of
Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge
location, data is routed to Amazon S3 over an optimized network path.

Use multipart uploads for faster file uploads into the destination Amazon S3 bucket

Multipart upload allows you to upload a single object as a set of parts. Each part is a
contiguous portion of the object's data. You can upload these object parts independently
and in any order. If transmission of any part fails, you can retransmit that part without
affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles
these parts and creates the object. In general, when your object size reaches 100 MB, you
should consider using multipart uploads instead of uploading the object in a single
operation. Multipart upload provides improved throughput, therefore it facilitates faster file
uploads.
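
A minimal sketch of combining both options with boto3; the bucket name, object keys, and multipart threshold are placeholder assumptions:

    import boto3
    from botocore.config import Config
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Turn on Transfer Acceleration for the destination bucket
    s3.put_bucket_accelerate_configuration(
        Bucket="raw-footage-bucket",  # placeholder bucket
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Upload large video files through the accelerate endpoint using multipart uploads
    accel_client = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    transfer_config = TransferConfig(multipart_threshold=100 * 1024 * 1024)  # multipart above ~100 MB
    accel_client.upload_file(
        "footage/interview.mp4",   # local file (placeholder)
        "raw-footage-bucket",
        "europe/interview.mp4",    # destination key (placeholder)
        Config=transfer_config,
    )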
29. (1/1) A leading video streaming service delivers billions of hours of content
from Amazon Simple Storage Service (Amazon S3) to customers around
the world. Amazon S3 also serves as the data lake for its big data
analytics solution. The data lake has a staging zone where intermediary
query results are kept only for 24 hours. These results are also heavily
referenced by other parts of the analytics pipeline.

Which of the following is the MOST cost-effective strategy for storing this
intermediary query data?

Store the intermediary query results in Amazon S3 Standard storage class

Store the intermediary query results in Amazon S3 Glacier Instant Retrieval storage
class

Store the intermediary query results in Amazon S3 One Zone-Infrequent Access storage class

Store the intermediary query results in Amazon S3 Standard-Infrequent Access storage class

Feedback

Store the intermediary query results in Amazon S3 Standard storage class

Amazon S3 Standard offers high durability, availability, and performance object storage for
frequently accessed data. Because it delivers low latency and high throughput, S3
Standard is appropriate for a wide variety of use cases, including cloud applications,
dynamic websites, content distribution, mobile and gaming applications, and big data
analytics. As there is no minimum storage duration charge and no retrieval fee (remember
that intermediary query results are heavily referenced by other parts of the analytics
pipeline), this is the MOST cost-effective storage class amongst the given options.
30. (1/1) A file-hosting service uses Amazon Simple Storage Service (Amazon
S3) under the hood to power its storage offerings. Currently all the
customer files are uploaded directly under a single Amazon S3 bucket.
The engineering team has started seeing scalability issues where
customer file uploads have started failing during the peak access hours
with more than 5000 requests per second.

Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?

Change the application architecture to create a new Amazon S3 bucket for each
customer and then upload each customer's files directly under the respective
buckets

Change the application architecture to create customer-specific custom prefixes within the single Amazon S3 bucket and then upload the daily files into those prefixed locations

Change the application architecture to create a new Amazon S3 bucket for each
day's data and then upload the daily files directly under that day's bucket

Change the application architecture to use Amazon Elastic File System (Amazon
EFS) instead of Amazon S3 for storing the customers' uploaded files

Feedback

Change the application architecture to create customer-specific custom prefixes within the
single Amazon S3 bucket and then upload the daily files into those prefixed locations

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. Your applications
can easily achieve thousands of transactions per second in request performance when
uploading and retrieving storage from Amazon S3. Amazon S3 automatically scales to
high request rates. For example, your application can achieve at least 3,500
PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket.

There are no limits to the number of prefixes in a bucket. You can increase your read or
write performance by parallelizing reads. For example, if you create 10 prefixes in an
Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000
read requests per second. Please see this example for more clarity on prefixes: if you have
a file f1 stored in an S3 object path like so
s3://your_bucket_name/folder1/sub_folder_1/f1, then /folder1/sub_folder_1/ becomes the
prefix for file f1.

Some data lake applications on Amazon S3 scan millions or billions of objects for queries
that run over petabytes of data. These data lake applications achieve single-instance
transfer rates that maximize the network interface used for their Amazon EC2 instance,
which can be up to 100 Gb/s on a single instance. These applications then aggregate
throughput across multiple instances to get multiple terabits per second. Therefore
creating customer-specific custom prefixes within the single bucket and then uploading
the daily files into those prefixed locations is the BEST solution for the given constraints.
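
To make the prefix idea concrete, here is a minimal Python (boto3) sketch under the assumption of a hypothetical bucket and customer naming scheme; because each customer's files land under a separate prefix, the per-prefix request limits apply independently and the overall request rate scales out.

import boto3

s3 = boto3.client("s3")

def upload_customer_file(bucket, customer_id, upload_date, local_path, file_name):
    # Key layout: <customer_id>/<date>/<file_name>, so each customer gets its own prefix
    key = f"{customer_id}/{upload_date}/{file_name}"
    s3.upload_file(local_path, bucket, key)

# Hypothetical example: lands at s3://file-hosting-bucket/customer-1742/2024-06-30/report.pdf
upload_customer_file("file-hosting-bucket", "customer-1742", "2024-06-30", "/tmp/report.pdf", "report.pdf")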
31. An Electronic Design Automation (EDA) application produces massive *0/1
volumes of data that can be divided into two categories. The 'hot data'
needs to be both processed and stored quickly in a parallel and
distributed fashion. The 'cold data' needs to be kept for reference with
quick access for reads and updates at a low cost.

Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?

Amazon FSx for Windows File Server

Amazon EMR

Amazon FSx for Lustre

AWS Glue

Correct answer

Amazon FSx for Lustre

Feedback

Amazon FSx for Windows File Server - Amazon FSx for Windows File Server provides fully
managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of
administrative features such as user quotas, end-user file restore, and Microsoft Active
Directory (AD) integration. FSx for Windows does not allow you to present S3 objects as
files and does not allow you to write changed data back to S3. Therefore you cannot
reference the "cold data" with quick access for reads and updates at low cost. Hence this
option is not correct.

Amazon EMR - Amazon EMR is the industry-leading cloud big data platform for processing
vast amounts of data using open source tools such as Apache Spark, Apache Hive,
Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR uses Hadoop, an
open-source framework, to distribute your data and processing across a resizable cluster
of Amazon EC2 instances. EMR does not offer the same storage and processing speed as
FSx for Lustre. So it is not the right fit for the given high-performance workflow scenario.

AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that
makes it easy for customers to prepare and load their data for analytics. AWS Glue job is
meant to be used for batch ETL data processing. AWS Glue does not offer the same
storage and processing speed as FSx for Lustre. So it is not the right fit for the given high-
performance workflow scenario.

References:

https://aws.amazon.com/fsx/lustre/
https://aws.amazon.com/fsx/windows/faqs/
32. A gaming company is developing a mobile game that streams score *1/1
updates to a backend processor and then publishes results on a
leaderboard. The company has hired you as an AWS Certified Solutions
Architect Associate to design a solution that can handle major traffic
spikes, process the mobile game updates in the order of receipt, and
store the processed updates in a highly available database. The company
wants to minimize the management overhead required to maintain the
solution.

Which of the following will you recommend to meet these requirements?

Push score updates to Amazon Kinesis Data Streams which uses an AWS
Lambda function to process these updates and then store these processed
updates in Amazon DynamoDB

Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic,
subscribe an AWS Lambda function to this Amazon SNS topic to process the
updates and then store these processed updates in a SQL database running on
Amazon EC2 instance

Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue
which uses a fleet of Amazon EC2 instances (with Auto Scaling) to process these
updates in the Amazon SQS queue and then store these processed updates in an
Amazon RDS MySQL database

Push score updates to Amazon Kinesis Data Streams which uses a fleet of Amazon
EC2 instances (with Auto Scaling) to process the updates in Amazon Kinesis Data
Streams and then store these processed updates in Amazon DynamoDB

Feedback

Push score updates to Amazon Kinesis Data Streams which uses an AWS Lambda
function to process these updates and then store these processed updates in Amazon
DynamoDB

To help ingest real-time data or streaming data at large scales, you can use Amazon
Kinesis Data Streams (KDS). KDS can continuously capture gigabytes of data per second
from hundreds of thousands of sources. The data collected is available in milliseconds,
enabling real-time analytics. KDS provides ordering of records, as well as the ability to read
and/or replay records in the same order to multiple Amazon Kinesis Applications.

AWS Lambda integrates natively with Kinesis Data Streams. The polling, checkpointing,
and error handling complexities are abstracted when you use this native integration. The
processed data can then be configured to be saved in Amazon DynamoDB.
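
A minimal sketch of this pipeline in Python (boto3) is shown below, assuming a hypothetical DynamoDB table named Leaderboard and a payload containing player_id and score fields; Lambda delivers the Kinesis records to the handler in shard order, base64-encoded.

import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Leaderboard")  # hypothetical table name

def handler(event, context):
    # Records from a shard arrive in order; process and persist each score update
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "player_id": payload["player_id"],   # hypothetical attribute names
            "score": payload["score"],
        })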
33. A financial services company recently launched an initiative to *0/1
improve the security of its AWS resources and it had enabled AWS Shield
Advanced across multiple AWS accounts owned by the company. Upon
analysis, the company has found that the costs incurred are much higher
than expected.

Which of the following would you attribute as the underlying reason for
the unexpectedly high costs for AWS Shield Advanced service?

AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in
increased costs

AWS Shield Advanced is being used for custom servers, that are not part of AWS
Cloud, thereby resulting in increased costs

Consolidated billing has not been enabled. All the AWS accounts should fall under a
single consolidated billing for the monthly fee to be charged only once

Savings Plans has not been enabled for the AWS Shield Advanced service
across all the AWS accounts

Correct answer

Consolidated billing has not been enabled. All the AWS accounts should fall under a
single consolidated billing for the monthly fee to be charged only once

Feedback

AWS Shield Advanced is being used for custom servers, that are not part of AWS Cloud,
thereby resulting in increased costs - AWS Shield Advanced does offer protection to
resources outside of AWS. This should not cause an unexpected spike in billing costs.

AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in
increased costs - AWS Shield Standard is automatically enabled for all AWS customers at
no additional cost. AWS Shield Advanced is an optional paid service.

Savings Plans has not been enabled for the AWS Shield Advanced service across all the
AWS accounts - This option has been added as a distractor. Savings Plans is a flexible
pricing model that offers low prices on Amazon EC2 instances, AWS Lambda, and AWS
Fargate usage, in exchange for a commitment to a consistent amount of usage (measured
in $/hour) for a 1 or 3 year term. Savings Plans is not applicable for the AWS Shield
Advanced service.

References:

https://aws.amazon.com/shield/faqs/

https://aws.amazon.com/savingsplans/faq/
34. Which of the following features of an Amazon S3 bucket can only be *1/1
suspended and not disabled once it has been enabled?

Requester Pays

Versioning

Static Website Hosting

Server Access Logging

Feedback

Versioning

Once you version-enable a bucket, it can never return to an unversioned state. Versioning
can only be suspended once it has been enabled.
35. A leading social media analytics company is contemplating moving *1/1
its dockerized application stack into AWS Cloud. The company is not sure
about the pricing for using Amazon Elastic Container Service (Amazon
ECS) with the EC2 launch type compared to the Amazon Elastic Container
Service (Amazon ECS) with the Fargate launch type.

Which of the following is correct regarding the pricing for these two
services?

Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type
are charged based on Amazon EC2 instances and Amazon EBS Elastic Volumes
used

Amazon ECS with EC2 launch type is charged based on EC2 instances and EBS
volumes used. Amazon ECS with Fargate launch type is charged based on
vCPU and memory resources that the containerized application requests

Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type
are just charged based on Elastic Container Service used per hour

Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type
are charged based on vCPU and memory resources that the containerized
application requests

Feedback

Amazon ECS with EC2 launch type is charged based on EC2 instances and EBS volumes
used. Amazon ECS with Fargate launch type is charged based on vCPU and memory
resources that the containerized application requests

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. ECS allows you to easily run, scale, and secure Docker container applications on AWS.

With the Fargate launch type, you pay for the amount of vCPU and memory resources that
your containerized application requests. vCPU and memory resources are calculated from
the time your container images are pulled until the Amazon ECS Task terminates, rounded
up to the nearest second. With the EC2 launch type, there is no additional charge for the
EC2 launch type. You pay for AWS resources (e.g. EC2 instances or EBS volumes) you
create to store and run your application.
36. While consolidating logs for the weekly reporting, a development *1/1
team at an e-commerce company noticed that an unusually large number
of illegal AWS application programming interface (API) queries were
made sometime during the week. Due to the off-season, there was no
visible impact on the systems. However, this event led the management
team to seek an automated solution that can trigger near-real-time
warnings in case such an event recurs.

Which of the following represents the best solution for the given
scenario?

Create an Amazon CloudWatch metric filter that processes AWS CloudTrail logs
having API call details and looks at any errors by factoring in all the error codes
that need to be tracked. Create an alarm based on this metric's rate to send an
Amazon SNS notification to the required team

AWS Trusted Advisor publishes metrics about check results to Amazon CloudWatch. Create an alarm to track status changes for checks in the Service Limits category for the APIs. The alarm will then notify when the service quota is reached or exceeded

Run Amazon Athena SQL queries against AWS CloudTrail log files stored in
Amazon S3 buckets. Use Amazon QuickSight to generate reports for managerial
dashboards

Configure AWS CloudTrail to stream event data to Amazon Kinesis. Use Amazon
Kinesis stream-level metrics in the Amazon CloudWatch to trigger an AWS Lambda
function that will trigger an error workflow

Feedback

Create an Amazon CloudWatch metric filter that processes AWS CloudTrail logs having
API call details and looks at any errors by factoring in all the error codes that need to be
tracked. Create an alarm based on this metric's rate to send an Amazon SNS notification
to the required team

AWS CloudTrail log data can be ingested into Amazon CloudWatch to monitor and identify
your AWS account activity against security threats, and create a governance framework for
security best practices. You can analyze log trail event data in CloudWatch using features
such as Logs Insight, Contributor Insights, Metric filters, and CloudWatch Alarms.

AWS CloudTrail integrates with the Amazon CloudWatch service to publish the API calls
being made to resources or services in the AWS account. The published event has
invaluable information that can be used for compliance, auditing, and governance of your
AWS accounts. CloudWatch provides several features to monitor API activity, analyze the logs at scale, and take action when malicious activity is discovered, without provisioning your own infrastructure.
For the AWS Cloudtrail logs available in Amazon CloudWatch Logs, you can begin
searching and filtering the log data by creating one or more metric filters. Use these metric
filters to turn log data into numerical CloudWatch metrics that you can graph or set a
CloudWatch Alarm on.
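
As a rough sketch of how this could be wired up with Python (boto3), assuming a hypothetical CloudTrail log group, metric namespace, and SNS topic ARN, the following creates a metric filter for unauthorized API calls and an alarm that notifies the team when the count crosses a threshold.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Hypothetical log group receiving the CloudTrail trail
logs.put_metric_filter(
    logGroupName="CloudTrail/ApiActivityLogGroup",
    filterName="UnauthorizedApiCalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        "metricName": "UnauthorizedApiCallCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alarm on the metric's rate and notify a hypothetical SNS topic
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="CloudTrailMetrics",
    MetricName="UnauthorizedApiCallCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)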

Note: AWS CloudTrail Insights helps AWS users identify and respond to unusual activity
associated with write API calls by continuously analyzing CloudTrail management events.

Insights events are logged when AWS CloudTrail detects unusual write management API
activity in your account. If you have AWS CloudTrail Insights enabled and CloudTrail
detects unusual activity, Insights events are delivered to the destination Amazon S3
bucket for your trail. You can also see the type of insight and the incident time when you
view Insights events on the CloudTrail console. Unlike other types of events captured in a
CloudTrail trail, Insights events are logged only when CloudTrail detects changes in your
account's API usage that differ significantly from the account's typical usage patterns.
37. A company has a web application that runs 24*7 in the production *1/1
environment. The development team at the company runs a clone of the
same application in the dev environment for up to 8 hours every day. The
company wants to build the MOST cost-optimal solution by deploying
these applications using the best-fit pricing options for Amazon Elastic
Compute Cloud (Amazon EC2) instances.

What would you recommend?

Use on-demand Amazon EC2 instances for the production application and spot
instances for the dev application

Use Amazon EC2 reserved instance (RI) for the production application and on-
demand instances for the dev application

Use Amazon EC2 reserved instance (RI) for the production application and spot
block instances for the dev application

Use Amazon EC2 reserved instance (RI) for the production application and spot
instances for the dev application

Feedback

Use Amazon EC2 reserved instance (RI) for the production application and on-demand
instances for the dev application

There are multiple pricing options for EC2 instances, such as On-Demand, Savings Plans,
Reserved Instances, and Spot Instances.

Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%) compared
to On-Demand pricing and provide a capacity reservation when used in a specific
Availability Zone. You have the flexibility to change families, OS types, and
tenancies while benefitting from RI pricing when you use Convertible RIs.

For the given use case, you can use Amazon EC2 Reserved Instances for the production
application as it is run 24*7. This way you can get a 72% discount if you avail a 3-year
term. You can use on-demand instances for the dev application since it is only used for up
to 8 hours per day. On-demand offers the flexibility to only pay for the Amazon EC2
instance when it is being used (0 to 8 hours for the given use case).
38. A major bank is using Amazon Simple Queue Service (Amazon SQS) *0/1
to migrate several core banking applications to the cloud to ensure high
availability and cost efficiency while simplifying administrative complexity
and overhead. The development team at the bank expects a peak rate of
about 1000 messages per second to be processed via SQS. It is
important that the messages are processed in order.

Which of the following options can be used to implement this system?

Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 2 messages per operation to process the messages at the peak rate

Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4 messages per
operation to process the messages at the peak rate

Use Amazon SQS standard queue to process the messages

Use Amazon SQS FIFO (First-In-First-Out) queue to process the messages

Correct answer

Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4 messages per
operation to process the messages at the peak rate

Feedback

Use Amazon SQS standard queue to process the messages - As messages need to be
processed in order, therefore standard queues are ruled out.

Use Amazon SQS FIFO (First-In-First-Out) queue to process the messages - By default,
FIFO queues support up to 300 messages per second and this is not sufficient to meet the
message processing throughput per the given use-case. Hence this option is incorrect.

Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 2 messages per
operation to process the messages at the peak rate - As mentioned earlier in the
explanation, you need to use FIFO queues in batch mode and process 4 messages per
operation, so that the FIFO queue can support up to 1200 messages per second. With 2
messages per operation, you can only support up to 600 messages per second.
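
For illustration, a minimal Python (boto3) sketch of sending a 4-message batch to a FIFO queue is shown below; the queue URL, message group, and deduplication IDs are hypothetical. Since each SendMessageBatch call counts as one operation, 300 operations per second with 4 messages per batch works out to roughly 1200 messages per second.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/core-banking.fifo"  # hypothetical

entries = [
    {
        "Id": str(i),
        "MessageBody": f"transaction-{i}",
        "MessageGroupId": "payments",          # ordering is preserved within a message group
        "MessageDeduplicationId": f"txn-{i}",  # or enable content-based deduplication on the queue
    }
    for i in range(4)
]

# One batched operation delivers 4 ordered messages
sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)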

References:

https://aws.amazon.com/sqs/

https://aws.amazon.com/sqs/features/
39. The engineering team at an e-commerce company wants to establish *1/1
a dedicated, encrypted, low latency, and high throughput connection
between its data center and AWS Cloud. The engineering team has set
aside sufficient time to account for the operational overhead of
establishing this connection.

As a solutions architect, which of the following solutions would you recommend to the company?

Use AWS site-to-site VPN to establish a connection between the data center and
AWS Cloud

Use AWS Direct Connect to establish a connection between the data center and
AWS Cloud

Use AWS Transit Gateway to establish a connection between the data center and
AWS Cloud

Use AWS Direct Connect plus virtual private network (VPN) to establish a
connection between the data center and AWS Cloud

Feedback

Use AWS Direct Connect plus virtual private network (VPN) to establish a connection
between the data center and AWS Cloud

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated
network connection from your premises to AWS. AWS Direct Connect lets you establish a
dedicated network connection between your network and one of the AWS Direct Connect
locations.

With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect
dedicated network connections with the Amazon VPC VPN. This combination provides an
IPsec-encrypted private connection that also reduces network costs, increases bandwidth
throughput, and provides a more consistent network experience than internet-based VPN
connections.

This solution combines the AWS managed benefits of the VPN solution with low latency,
increased bandwidth, more consistent benefits of the AWS Direct Connect solution, and an
end-to-end, secure IPsec connection. Therefore, AWS Direct Connect plus VPN is the
correct solution for this use-case.
40. A telecom company operates thousands of hardware devices like *0/1
switches, routers, cables, etc. The real-time status data for these devices
must be fed into a communications application for notifications.
Simultaneously, another analytics application needs to read the same
real-time status data and analyze all the connecting lines that may go
down because of any device failures.

As an AWS Certified Solutions Architect – Associate, which of the following solutions would you suggest, so that both the applications can consume the real-time status data concurrently?

Amazon Simple Notification Service (SNS)

Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon
SES)

Amazon Kinesis Data Streams

Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service
(SNS)

Correct answer

Amazon Kinesis Data Streams

Feedback

Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS) is a
highly available, durable, secure, fully managed pub/sub messaging service that enables
you to decouple microservices, distributed systems, and serverless applications. Amazon
SNS provides topics for high-throughput, push-based, many-to-many messaging. Amazon
SNS is a notification service and cannot be used for real-time processing of data.

Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS) -
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted
queue for storing messages as they travel between computers. Amazon SQS lets you
easily move data between distributed application components and helps you build
applications in which messages are processed independently (with message-level ack/fail
semantics), such as automated workflows. Since multiple applications need to consume
the same data stream concurrently, Kinesis is a better choice when compared to the
combination of SQS with SNS.

Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES) -
As discussed above, Amazon Kinesis is a better option for this use case in comparison to
Amazon SQS. Also, Amazon SES does not fit this use-case. Hence, this option is an
incorrect answer.

Reference:
https://aws.amazon.com/kinesis/data-streams/faqs/
41. The DevOps team at an e-commerce company wants to perform *0/1
some maintenance work on a specific Amazon EC2 instance that is part
of an Auto Scaling group using a step scaling policy. The team is facing a
maintenance challenge - every time the team deploys a maintenance
patch, the instance health check status shows as out of service for a few
minutes. This causes the Auto Scaling group to provision another
replacement instance immediately.

As a solutions architect, which are the MOST time/resource efficient steps that you would recommend so that the maintenance work can be completed at the earliest? (Select two)

Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply
the maintenance patch to the instance. Once the instance is ready, you can
manually set the instance's health status back to healthy and activate the
ReplaceUnhealthy process type again

Take a snapshot of the instance, create a new Amazon Machine Image (AMI) and
then launch a new instance using this AMI. Apply the maintenance patch to this
new instance and then add it back to the Auto Scaling Group by using the manual
scaling policy. Terminate the earlier instance that had the maintenance issue

Put the instance into the Standby state and then update the instance by
applying the maintenance patch. Once the instance is ready, you can exit the
Standby state and then return the instance to service

Delete the Auto Scaling group and apply the maintenance fix to the given instance.
Create a new Auto Scaling group and add all the instances again using the manual
scaling policy

Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance's health status back to healthy and activate the ScheduledActions process type again

Correct answer

Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply
the maintenance patch to the instance. Once the instance is ready, you can
manually set the instance's health status back to healthy and activate the
ReplaceUnhealthy process type again

Put the instance into the Standby state and then update the instance by applying
the maintenance patch. Once the instance is ready, you can exit the Standby state
and then return the instance to service
Feedback

Take a snapshot of the instance, create a new Amazon Machine Image (AMI) and then
launch a new instance using this AMI. Apply the maintenance patch to this new instance
and then add it back to the Auto Scaling Group by using the manual scaling policy.
Terminate the earlier instance that had the maintenance issue - Taking the snapshot of the
existing instance to create a new AMI and then creating a new instance in order to apply
the maintenance patch is not time/resource optimal, hence this option is ruled out.

Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create
a new Auto Scaling group and add all the instances again using the manual scaling policy -
It's not recommended to delete the Auto Scaling group just to apply a maintenance patch
on a specific instance.

Suspend the ScheduledActions process type for the Auto Scaling group and apply the
maintenance patch to the instance. Once the instance is ready, you can manually
set the instance's health status back to healthy and activate the ScheduledActions
process type again - Amazon EC2 Auto Scaling does not execute scaling actions that are
scheduled to run during the suspension period. This option is not relevant to the given use-
case.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-
processes.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/health-checks-overview.html
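
Both recommended approaches can be scripted. The following Python (boto3) sketch, using hypothetical group and instance identifiers, shows moving the instance into Standby for patching as well as suspending only the ReplaceUnhealthy process.

import boto3

autoscaling = boto3.client("autoscaling")
group, instance = "web-asg", "i-0abcd1234example"  # hypothetical identifiers

# Approach 1: move the instance to Standby, patch it, then return it to service
autoscaling.enter_standby(
    AutoScalingGroupName=group,
    InstanceIds=[instance],
    ShouldDecrementDesiredCapacity=True,
)
# ... apply the maintenance patch ...
autoscaling.exit_standby(AutoScalingGroupName=group, InstanceIds=[instance])

# Approach 2: suspend only ReplaceUnhealthy while patching, then resume it
autoscaling.suspend_processes(AutoScalingGroupName=group, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the maintenance patch ...
autoscaling.set_instance_health(InstanceId=instance, HealthStatus="Healthy")
autoscaling.resume_processes(AutoScalingGroupName=group, ScalingProcesses=["ReplaceUnhealthy"])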
42. A large financial institution operates an on-premises data center with *1/1
hundreds of petabytes of data managed on Microsoft’s Distributed File
System (DFS). The CTO wants the organization to transition into a hybrid
cloud environment and run data-intensive analytics workloads that
support DFS.

Which of the following AWS services can facilitate the migration of these
workloads?

Amazon FSx for Windows File Server

AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)

Amazon FSx for Lustre

Microsoft SQL Server on AWS

Feedback

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable file storage
that is accessible over the industry-standard Server Message Block (SMB) protocol. It is
built on Windows Server, delivering a wide range of administrative features such as user
quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx
supports the use of Microsoft’s Distributed File System (DFS) to organize shares into a
single folder structure up to hundreds of PB in size. So this option is correct.
43. A retail company's dynamic website is hosted using on-premises *1/1
servers in its data center in the United States. The company is launching
its website in Asia, and it wants to optimize the website loading times for
new users in Asia. The website's backend must remain in the United
States. The website is being launched in a few days, and an immediate
solution is needed.

What would you recommend?

Use Amazon CloudFront with a custom origin pointing to the DNS record of the
website on Amazon Route 53

Use Amazon CloudFront with a custom origin pointing to the on-premises servers

Migrate the website to Amazon S3. Use S3 cross-region replication (S3 CRR)
between AWS Regions in the US and Asia

Leverage an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers

Feedback

Use Amazon CloudFront with a custom origin pointing to the on-premises servers

Amazon CloudFront is a web service that gives businesses and web application
developers an easy and cost-effective way to distribute content with low latency and high
data transfer speeds. Amazon CloudFront uses standard cache control headers you set on
your files to identify static and dynamic content. You can use different origins for different
types of content on a single site – e.g. Amazon S3 for static objects, Amazon EC2 for
dynamic content, and custom origins for third-party content.

An origin server stores the original, definitive version of your objects. If you're serving
content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server,
such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud
(Amazon EC2) instance or on a server that you manage; these servers are also known as
custom origins.

Amazon CloudFront employs a global network of edge locations and regional edge caches
that cache copies of your content close to your viewers. Amazon CloudFront ensures that
end-user requests are served by the closest edge location. As a result, viewer requests
travel a short distance, improving performance for your viewers. Therefore for the given
use case, the users in Asia will enjoy a low latency experience while using the website
even though the on-premises servers continue to be in the US.
44. A company uses Amazon DynamoDB as a data store for various *1/1
kinds of customer data, such as user profiles, user events, clicks, and
visited links. Some of these use-cases require a high request rate
(millions of requests per second), low predictable latency, and reliability.
The company now wants to add a caching layer to support high read
volumes.

As a solutions architect, which of the following AWS services would you recommend as a caching layer for this use-case? (Select two)

Amazon Relational Database Service (Amazon RDS)

Amazon Redshift

Amazon ElastiCache

Amazon DynamoDB Accelerator (DAX)

Amazon OpenSearch Service

Feedback

Amazon DynamoDB Accelerator (DAX)

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from
milliseconds to microseconds – even at millions of requests per second. DAX does all the
heavy lifting required to add in-memory acceleration to your DynamoDB tables, without
requiring developers to manage cache invalidation, data population, or cluster
management. Therefore, this is a correct option.
45. One of the biggest football leagues in Europe has granted the *1/1
distribution rights for live streaming its matches in the USA to a silicon
valley based streaming services company. As per the terms of
distribution, the company must make sure that only users from the USA
are able to live stream the matches on their platform. Users from other
countries in the world must be denied access to these live-streamed
matches.

Which of the following options would allow the company to enforce these
streaming restrictions? (Select two)

Use Amazon Route 53 based failover routing policy to restrict distribution of content to only the locations in which you have distribution rights

Use georestriction to prevent users in specific geographic locations from accessing content that you're distributing through an Amazon CloudFront web distribution

Use Amazon Route 53 based weighted routing policy to restrict distribution of content to only the locations in which you have distribution rights

Use Amazon Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights

Use Amazon Route 53 based latency-based routing policy to restrict distribution of content to only the locations in which you have distribution rights

Feedback

Use Amazon Route 53 based geolocation routing policy to restrict distribution of content
to only the locations in which you have distribution rights

Geolocation routing lets you choose the resources that serve your traffic based on the
geographic location of your users, meaning the location that DNS queries originate from.
For example, you might want all queries from Europe to be routed to an ELB load balancer
in the Frankfurt region. You can also use geolocation routing to restrict the distribution of
content to only the locations in which you have distribution rights.

Use georestriction to prevent users in specific geographic locations from accessing content that you're distributing through an Amazon CloudFront web distribution

You can use georestriction, also known as geo-blocking, to prevent users in specific
geographic locations from accessing content that you're distributing through an Amazon CloudFront web distribution. When a user requests your content, Amazon CloudFront
typically serves the requested content regardless of where the user is located. If you need
to prevent users in specific countries from accessing your content, you can use the
CloudFront geo restriction feature to do one of the following: Allow your users to access
your content only if they're in one of the countries on a whitelist of approved countries.
Prevent your users from accessing your content if they're in one of the countries on a
blacklist of banned countries. So this option is also correct.
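
As a rough illustration of the CloudFront side, the sketch below (Python, boto3, with a hypothetical distribution ID) updates a distribution's geo restriction so that only viewers in the US are allowed; requests from other countries are rejected by CloudFront.

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLEDISTID"  # hypothetical distribution ID

resp = cloudfront.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# Whitelist (allow list) limited to the US; all other countries are blocked
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 1,
        "Items": ["US"],
    }
}

cloudfront.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])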
46. A new DevOps engineer has just joined a development team and *1/1
wants to understand the replication capabilities for Amazon RDS Multi-AZ
deployment as well as Amazon RDS Read-replicas.

Which of the following correctly summarizes these capabilities for the given database?

Multi-AZ follows asynchronous replication and spans one Availability Zone (AZ)
within a single region. Read replicas follow synchronous replication and can be
within an Availability Zone (AZ), Cross-AZ, or Cross-Region

Multi-AZ follows asynchronous replication and spans at least two Availability Zones
(AZs) within a single region. Read replicas follow asynchronous replication and can
be within an Availability Zone (AZ), Cross-AZ, or Cross-Region

Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs) within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region

Multi-AZ follows asynchronous replication and spans at least two Availability Zones
(AZs) within a single region. Read replicas follow synchronous replication and can
be within an Availability Zone (AZ), Cross-AZ, or Cross-Region

Feedback

Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs)
within a single region. Read replicas follow asynchronous replication and can be within an
Availability Zone (AZ), Cross-AZ, or Cross-Region

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS
database (DB) instances, making them a natural fit for production database workloads.
When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary
DB Instance and synchronously replicates the data to a standby instance in a different
Availability Zone (AZ). Multi-AZ spans at least two Availability Zones (AZs) within a single
region.

Amazon RDS Read Replicas provide enhanced performance and durability for RDS
database (DB) instances. They make it easy to elastically scale out beyond the capacity
constraints of a single DB instance for read-heavy database workloads. For the MySQL,
MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a
second DB instance using a snapshot of the source DB instance. It then uses the engines'
native asynchronous replication to update the read replica whenever there is a change to
the source DB instance.

Amazon RDS replicates all databases in the source DB instance. Read replicas can be
within an Availability Zone (AZ), Cross-AZ, or Cross-Region.
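
To make the two replication modes concrete, here is a hedged Python (boto3) sketch with hypothetical instance identifiers: enabling Multi-AZ adds a synchronously replicated standby in the same region, while creating a read replica in another region uses the engine's asynchronous replication.

import boto3

# Enable Multi-AZ (synchronous standby) on the primary instance in us-east-1
rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.modify_db_instance(
    DBInstanceIdentifier="orders-db",       # hypothetical primary instance
    MultiAZ=True,
    ApplyImmediately=True,
)

# Create a cross-region read replica (asynchronous replication) in eu-west-1
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-eu",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-db",
    DBInstanceClass="db.r6g.large",
)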
47. The sourcing team at the US headquarters of a global e-commerce *1/1
company is preparing a spreadsheet of the new product catalog. The
spreadsheet is saved on an Amazon Elastic File System (Amazon EFS)
created in us-east-1 region. The sourcing team counterparts from other
AWS regions such as Asia Pacific and Europe also want to collaborate on
this spreadsheet.

As a solutions architect, what is your recommendation to enable this collaboration with the LEAST amount of operational overhead?

The spreadsheet will have to be copied in Amazon S3 which can then be accessed
from any AWS region

The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be
accessed in other AWS regions by using an inter-region VPC peering connection

The spreadsheet will have to be copied into Amazon EFS file systems of other AWS
regions as Amazon EFS is a regional service and it does not allow access from
other AWS regions

The spreadsheet data will have to be moved into an Amazon RDS for MySQL
database which can then be accessed from any AWS region

Feedback

The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be accessed in
other AWS regions by using an inter-region VPC peering connection

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed
elastic NFS file system for use with AWS Cloud services and on-premises resources.

Amazon EFS is a regional service storing data within and across multiple Availability
Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file
system across AZs, regions, and VPCs, while on-premises servers can access it using AWS
Direct Connect or AWS VPN.

You can connect to Amazon EFS file systems from EC2 instances in other AWS regions
using an inter-region VPC peering connection, and from on-premises servers using an AWS
VPN connection. So this is the correct option.
48. A company runs a data processing workflow that takes about 60 *1/1
minutes to complete. The workflow can withstand disruptions and it can
be started and stopped multiple times.

Which is the most cost-effective way to build a solution for the workflow?

Use Amazon EC2 on-demand instances to run the workflow processes

Use Amazon EC2 spot instances to run the workflow processes

Use Amazon EC2 reserved instances to run the workflow processes

Use AWS Lambda function to run the workflow processes

Feedback

Use Amazon EC2 spot instances to run the workflow processes

Amazon EC2 Spot instances allow you to request spare Amazon EC2 computing capacity
for up to 90% off the On-Demand price.

Spot instances are recommended for:

Applications that have flexible start and end times
Applications that are feasible only at very low compute prices
Users with urgent computing needs for large amounts of additional capacity

For the given use case, spot instances offer the most cost-effective solution as the
workflow can withstand disruptions and can be started and stopped multiple times.

For example, considering a process that runs for an hour and needs about 1024 MB of
memory, spot instance pricing for a t2.micro instance (having 1024 MB of RAM) is
$0.0035 per hour.

Contrast this with the pricing of a Lambda function (having 1024 MB of allocated
memory), which comes out to $0.0000000167 per 1ms or $0.06 per hour ($0.0000000167
* 1000 * 60 * 60 per hour).

Thus, a spot instance turns out to be about 20 times more cost-effective than a Lambda function for meeting the requirements of the given use case.
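
A minimal Python (boto3) sketch of launching such a Spot Instance is shown below, with a hypothetical AMI and subnet; because the workflow tolerates interruptions, a one-time Spot request that terminates on interruption is sufficient.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical AMI and subnet IDs
ec2.run_instances(
    ImageId="ami-0abcdef1234567890",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)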
49. The flagship application for a gaming company connects to an *1/1
Amazon Aurora database and the entire technology stack is currently
deployed in the United States. Now, the company has plans to expand to
Europe and Asia for its operations. It needs the games table to be
accessible globally but needs the users and games_played tables to be
regional only.

How would you implement this with minimal application refactoring?

Use a Amazon DynamoDB global table for the games table and use Amazon Aurora
for the users and games_played tables

Use an Amazon Aurora Global Database for the games table and use Amazon
DynamoDB tables for the users and games_played tables

Use an Amazon Aurora Global Database for the games table and use Amazon
Aurora for the users and games_played tables

Use a Amazon DynamoDB global table for the games table and use Amazon
DynamoDB tables for the users and games_played tables

Feedback

Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for
the users and games_played tables

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the
cloud, that combines the performance and availability of traditional enterprise databases
with the simplicity and cost-effectiveness of open source databases. Amazon Aurora
features a distributed, fault-tolerant, self-healing storage system that auto-scales up to
128TB per database instance. Aurora is not an in-memory database.

Amazon Aurora Global Database is designed for globally distributed applications, allowing
a single Amazon Aurora database to span multiple AWS regions. It replicates your data
with no impact on database performance, enables fast local reads with low latency in
each region, and provides disaster recovery from region-wide outages. Amazon Aurora
Global Database is the correct choice for the given use-case.

For the given use-case, we, therefore, need to have two Aurora clusters, one for the global
table (games table) and the other one for the local tables (users and games_played
tables).
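
As a hedged sketch of how the games cluster could be promoted into a global database with Python (boto3), assuming hypothetical cluster identifiers and an Aurora MySQL engine, the existing regional cluster becomes the primary and a secondary cluster is added in Europe; the users and games_played tables stay in ordinary regional Aurora clusters.

import boto3

# Promote the existing games cluster (us-east-1) into an Aurora Global Database
rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.create_global_cluster(
    GlobalClusterIdentifier="games-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:games-cluster",
)

# Attach a secondary, read-only cluster in Europe for low-latency local reads
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_cluster(
    DBClusterIdentifier="games-cluster-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="games-global",
)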
50. A healthcare startup needs to enforce compliance and regulatory *1/1
guidelines for objects stored in Amazon S3. One of the key requirements
is to provide adequate protection against accidental deletion of objects.

As a solutions architect, what are your recommendations to address these guidelines? (Select two)

Enable versioning on the Amazon S3 bucket

Establish a process to get managerial approval for deleting Amazon S3 objects

Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket

Change the configuration on Amazon S3 console so that the user needs to provide
additional confirmation while deleting any Amazon S3 object

Create an event trigger on deleting any Amazon S3 object. The event invokes an
Amazon Simple Notification Service (Amazon SNS) notification via email to the IT
manager

Feedback

Enable versioning on the Amazon S3 bucket

Versioning is a means of keeping multiple variants of an object in the same bucket. You
can use versioning to preserve, retrieve, and restore every version of every object stored in
your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from
accidental deletion or overwrite.
For example:
If you overwrite an object, it results in a new object version in the bucket. You can always
restore the previous version. If you delete an object, instead of removing it permanently,
Amazon S3 inserts a delete marker, which becomes the current object version. You can
always restore the previous version. Hence, this is the correct option.

Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket

To provide additional protection, multi-factor authentication (MFA) delete can be enabled. MFA delete requires secondary authentication to take place before objects can be permanently deleted from an Amazon S3 bucket. Hence, this is the correct option.
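
A short Python (boto3) sketch of both protections is given below, with a hypothetical bucket name and MFA device; note that MFA delete can only be configured by the root user, supplying the MFA device serial and a current token code.

import boto3

s3 = boto3.client("s3")
bucket = "health-records-backup"  # hypothetical bucket name

# Enable versioning so a delete only inserts a delete marker
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Enable MFA delete (root user credentials required); the MFA value is
# "<device-arn> <current-token-code>", shown here with placeholder values
s3.put_bucket_versioning(
    Bucket=bucket,
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)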
51. A video analytics organization has been acquired by a leading media *1/1
company. The analytics organization has 10 independent applications
with an on-premises data footprint of about 70 Terabytes for each
application. The CTO of the media company has set a timeline of two
weeks to carry out the data migration from on-premises data center to
AWS Cloud and establish connectivity.

Which of the following are the MOST cost-effective options for completing the data transfer and establishing connectivity? (Select two)

Setup AWS Site-to-Site VPN to establish on-going connectivity between the on-
premises data center and AWS Cloud

Order 10 AWS Snowball Edge Storage Optimized devices to complete the one-
time data transfer

Order 70 AWS Snowball Edge Storage Optimized devices to complete the one-time
data transfer

Order 1 AWS Snowmobile to complete the one-time data transfer

Setup AWS Direct Connect to establish connectivity between the on-premises data
center and AWS Cloud

Feedback

Order 10 AWS Snowball Edge Storage Optimized devices to complete the one-time data
transfer

AWS Snowball Edge Storage Optimized is the optimal choice if you need to securely and
quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80
Terabytes of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb/s of network connectivity to address large-scale data transfer and pre-processing
use cases.

As each Snowball Edge Storage Optimized device can handle 80 Terabytes of data, you
can order 10 such devices to take care of the data transfer for all applications.

Exam Alert:

The original Snowball devices were transitioned out of service and Snowball Edge Storage
Optimized are now the primary devices used for data transfer. You may see the Snowball
device on the exam, just remember that the original Snowball device had 80 Terabytes of
storage space.

Setup AWS Site-to-Site VPN to establish on-going connectivity between the on-premises
data center and AWS Cloud

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch
office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend
your data center or branch office network to the cloud with an AWS Site-to-Site VPN
connection. A VPC VPN Connection utilizes IPSec to establish encrypted network
connectivity between your intranet and Amazon VPC over the Internet. VPN Connections
can be configured in minutes and are a good solution if you have an immediate need, have
low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-
based connectivity.

Therefore this option is the right fit for the given use-case as the connectivity can be easily
established within the given timeframe.
52. The payroll department at a company initiates several *1/1
computationally intensive workloads on Amazon EC2 instances at a
designated hour on the last day of every month. The payroll department
has noticed a trend of severe performance lag during this hour. The
engineering team has figured out a solution by using Auto Scaling Group
for these Amazon EC2 instances and making sure that 10 Amazon EC2
instances are available during this peak usage hour. For normal
operations only 2 Amazon EC2 instances are enough to cater to the
workload.
As a solutions architect, which of the following steps would you
recommend to implement the solution?

Configure your Auto Scaling group by creating a target tracking policy and setting
the instance count to 10 at the designated hour. This causes the scale-out to
happen before peak traffic kicks in at the designated hour

Configure your Auto Scaling group by creating a scheduled action that kicks-off at
the designated hour on the last day of the month. Set the min count as well as the
max count of instances to 10. This causes the scale-out to happen before peak
traffic kicks in at the designated hour

Configure your Auto Scaling group by creating a scheduled action that kicks-off
at the designated hour on the last day of the month. Set the desired capacity of
instances to 10. This causes the scale-out to happen before peak traffic kicks in
at the designated hour

Configure your Auto Scaling group by creating a simple tracking policy and setting
the instance count to 10 at the designated hour. This causes the scale-out to
happen before peak traffic kicks in at the designated hour

Feedback

Configure your Auto Scaling group by creating a scheduled action that kicks-off at the
designated hour on the last day of the month. Set the desired capacity of instances to 10.
This causes the scale-out to happen before peak traffic kicks in at the designated hour

Scheduled scaling allows you to set your own scaling schedule. For example, let's say that
every week the traffic to your web application starts to increase on Wednesday, remains
high on Thursday, and starts to decrease on Friday. You can plan your scaling actions
based on the predictable traffic patterns of your web application. Scaling actions are
performed automatically as a function of time and date.

A scheduled action sets the minimum, maximum, and desired sizes to what is specified by
the scheduled action at the time specified by the scheduled action. For the given use case,
the correct solution is to set the desired capacity to 10. When we want to specify a range
of instances, then we must use min and max values.
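
A minimal Python (boto3) sketch of such a scheduled action is shown below, using a hypothetical Auto Scaling group name and times: one action scales the group out to a desired capacity of 10 shortly before the payroll hour, and a second one scales it back in to 2 afterwards.

import boto3
from datetime import datetime

autoscaling = boto3.client("autoscaling")

# Scale out just before the payroll run on the last day of the month
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="payroll-asg",          # hypothetical group name
    ScheduledActionName="payroll-scale-out",
    StartTime=datetime(2024, 6, 30, 17, 45),     # hypothetical designated hour (UTC)
    DesiredCapacity=10,
)

# Scale back in once the peak usage hour is over
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="payroll-asg",
    ScheduledActionName="payroll-scale-in",
    StartTime=datetime(2024, 6, 30, 19, 0),
    DesiredCapacity=2,
)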

53. A software engineering intern at an e-commerce company is *1/1
documenting the process flow to provision Amazon EC2 instances via the
Amazon EC2 API. These instances are to be used for an internal
application that processes Human Resources payroll data. He wants to
highlight those volume types that cannot be used as a boot volume.

Can you help the intern by identifying those storage volume types that
CANNOT be used as boot volumes while creating the instances? (Select
two)

Cold Hard disk drive (sc1)

Provisioned IOPS Solid state drive (io1)

Instance Store

General Purpose Solid State Drive (gp2)

Throughput Optimized Hard disk drive (st1)

Feedback

Throughput Optimized Hard disk drive (st1)

Cold Hard disk drive (sc1)

The Amazon EBS volume types fall into two categories:

Solid state drive (SSD) backed volumes optimized for transactional workloads involving
frequent read/write operations with small I/O size, where the dominant performance
attribute is IOPS.

Hard disk drive (HDD) backed volumes optimized for large streaming workloads where
throughput (measured in MiB/s) is a better performance measure than IOPS.

Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be used as a
boot volume, so these two options are correct.
54. A development team requires permissions to list an Amazon S3 *1/1
bucket and delete objects from that bucket. A systems administrator has
created the following IAM policy to provide access to the bucket and
applied that policy to the group. The group is not able to delete objects in
the bucket. The company follows the principle of least privilege.

Option 1 Option 2
Option 3 Option 4

Feedback

{
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
],
"Effect": "Allow"
}

The main elements of a policy statement are:

Effect: Specifies whether the statement will Allow or Deny an action (Allow is the effect
defined here).

Action: Describes a specific action or actions that will either be allowed or denied to run
based on the Effect entered. API actions are unique to each service (DeleteObject is the
action defined here).

Resource: Specifies the resources—for example, an Amazon S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format (example-bucket/* is the resource defined here).

This policy provides the necessary delete permissions on the resources of the Amazon S3
bucket to the group.
55. A data analytics company measures what the consumers watch and *1/1
what advertising they’re exposed to. This real-time data is ingested into
its on-premises data center and subsequently, the daily data feed is
compressed into a single file and uploaded on Amazon S3 for backup.
The typical compressed file size is around 2 gigabytes.

Which of the following is the fastest way to upload the daily compressed
file into Amazon S3?

Upload the compressed file using multipart upload

Upload the compressed file in a single operation

FTP the compressed file into an Amazon EC2 instance that runs in the same region
as the Amazon S3 bucket. Then transfer the file from the Amazon EC2 instance into
the Amazon S3 bucket

Upload the compressed file using multipart upload with Amazon S3 Transfer
Acceleration (Amazon S3TA)

Feedback

Upload the compressed file using multipart upload with Amazon S3 Transfer Acceleration
(Amazon S3TA)

Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers
of files over long distances between your client and an S3 bucket. Transfer Acceleration
takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data
arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

Multipart upload allows you to upload a single object as a set of parts. Each part is a
contiguous portion of the object's data. You can upload these object parts independently
and in any order. If transmission of any part fails, you can retransmit that part without
affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles
these parts and creates the object. If you're uploading large objects over a stable high-
bandwidth network, use multipart uploading to maximize the use of your available
bandwidth by uploading object parts in parallel for multi-threaded performance. If you're
uploading over a spotty network, use multipart uploading to increase resiliency to network
errors by avoiding upload restarts.
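
A combined sketch in Python (boto3) is shown below, assuming a hypothetical bucket and file name: Transfer Acceleration is enabled on the bucket once, and the upload then goes through the accelerate endpoint while the transfer manager performs a multipart upload with parallel parts.

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

bucket = "daily-feed-backup-bucket"  # hypothetical bucket name

# One-time setup: turn on Transfer Acceleration for the bucket
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload the ~2 GB compressed file through the accelerate endpoint using multipart upload
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_cfg = TransferConfig(multipart_threshold=100 * 1024 * 1024, max_concurrency=10)
s3_accel.upload_file("daily-feed.gz", bucket, "backups/daily-feed.gz", Config=transfer_cfg)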
56. The IT department at a consulting firm is conducting a training *1/1
workshop for new developers. As part of an evaluation exercise on
Amazon S3, the new developers were asked to identify the invalid storage
class lifecycle transitions for objects stored on Amazon S3.

Can you spot the INVALID lifecycle transitions from the options below?
(Select two)

Amazon S3 Standard-IA => Amazon S3 One Zone-IA

Amazon S3 Intelligent-Tiering => Amazon S3 Standard

Amazon S3 One Zone-IA => Amazon S3 Standard-IA

Amazon S3 Standard => Amazon S3 Intelligent-Tiering

Amazon S3 Standard-IA => Amazon S3 Intelligent-Tiering

Feedback

As the question wants to know about the INVALID lifecycle transitions, the following
options are the correct answers -

Amazon S3 Intelligent-Tiering => Amazon S3 Standard

Amazon S3 One Zone-IA => Amazon S3 Standard-IA

Following are the unsupported life cycle transitions for S3 storage classes - Any storage
class to the Amazon S3 Standard storage class. Any storage class to the Reduced
Redundancy storage class. The Amazon S3 Intelligent-Tiering storage class to the Amazon
S3 Standard-IA storage class. The Amazon S3 One Zone-IA storage class to the Amazon
S3 Standard-IA or Amazon S3 Intelligent-Tiering storage classes.
57. An audit department generates and accesses the audit reports only *0/1
twice in a financial year. The department uses AWS Step Functions to
orchestrate the report creating process that has failover and retry
scenarios built into the solution. The underlying data to create these audit
reports is stored on Amazon S3, runs into hundreds of Terabytes and
should be available with millisecond latency.

As an AWS Certified Solutions Architect – Associate, which is the MOST cost-effective storage class that you would recommend to be used for this use-case?

Amazon S3 Glacier Deep Archive

Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)

Amazon S3 Standard

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

Correct answer

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

Feedback

Amazon S3 Standard - Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. As described above, Amazon S3 Standard-IA storage is a better fit than Amazon S3 Standard, hence using S3 Standard is ruled out for the given use-case.

Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) - For a small monthly object
monitoring and automation charge, Amazon S3 Intelligent-Tiering monitors access
patterns and automatically moves objects that have not been accessed to lower-cost
access tiers. The Amazon S3 Intelligent-Tiering storage class is designed to optimize
costs by automatically moving data to the most cost-effective access tier, without
performance impact or operational overhead. S3 Standard-IA matches the high durability,
high throughput, and low latency of S3 Intelligent-Tiering, with a low per GB storage price
and per GB retrieval fee. Moreover, Standard-IA has the same availability as that of
Amazon S3 Intelligent-Tiering. So, it's cost-efficient to use S3 Standard-IA instead of S3
Intelligent-Tiering.

Amazon S3 Glacier Deep Archive - Amazon S3 Glacier Deep Archive is a secure, durable,
and low-cost storage class for data archiving. Amazon S3 Glacier Deep Archive does not
support millisecond latency, so this option is ruled out.
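
If the access pattern is known up front, the storage class can be set directly at upload time instead of relying on a lifecycle transition. A minimal sketch with hypothetical bucket, key, and file names:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, key, and local file; StorageClass places the object
# straight into S3 Standard-IA at upload time.
with open("h1-audit-data.parquet", "rb") as f:
    s3.put_object(
        Bucket="audit-reports-example",
        Key="fy2023/h1-audit-data.parquet",
        Body=f,
        StorageClass="STANDARD_IA",
    )
```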
58. A US-based healthcare startup is building an interactive diagnostic *1/1
tool for COVID-19 related assessments. The users would be required to
capture their personal health records via this tool. As this is sensitive
health information, the backup of the user data must be kept encrypted in
Amazon Simple Storage Service (Amazon S3). The startup does not want
to provide its own encryption keys but still wants to maintain an audit trail
of when an encryption key was used and by whom.

Which of the following is the BEST solution for this use-case?

Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the
user data on Amazon S3

Use server-side encryption with AWS Key Management Service keys (SSE-KMS)
to encrypt the user data on Amazon S3

Use server-side encryption with customer-provided keys (SSE-C) to encrypt the user
data on Amazon S3

Use client-side encryption with client provided keys and then upload the encrypted
user data to Amazon S3

Feedback

Use server-side encryption with AWS Key Management Service keys (SSE-KMS) to encrypt
the user data on Amazon S3

AWS Key Management Service (AWS KMS) is a service that combines secure, highly
available hardware and software to provide a key management system scaled for the
cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify a
customer-managed CMK that you have already created. SSE-KMS provides you with an
audit trail that shows when your CMK was used and by whom. Therefore SSE-KMS is the
correct solution for this use-case.
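
As a rough illustration (the bucket, object key, and KMS key ARN below are hypothetical), a client can request SSE-KMS on a per-object basis; each use of the key is then recorded in AWS CloudTrail, which provides the required audit trail.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; the object is encrypted server-side with the specified
# customer-managed KMS key, and key usage is logged in AWS CloudTrail.
s3.put_object(
    Bucket="health-records-backup-example",
    Key="users/12345/record.json",
    Body=b'{"assessment": "sample"}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)
```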
59. A leading carmaker would like to build a new car-as-a-sensor service *1/1
by leveraging fully serverless components that are provisioned and
managed automatically by AWS. The development team at the carmaker
does not want an option that requires the capacity to be manually
provisioned, as it does not want to respond manually to changing
volumes of sensor data.

Given these constraints, which of the following solutions is the BEST fit to
develop this car-as-a-sensor service?

Ingest the sensor data in Amazon Kinesis Data Streams, which is polled by an
application running on an Amazon EC2 instance and the data is written into an
auto-scaled Amazon DynamoDB table for downstream processing

Ingest the sensor data in Amazon Kinesis Data Firehose, which directly writes the
data into an auto-scaled Amazon DynamoDB table for downstream processing

Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS)
standard queue, which is polled by an AWS Lambda function in batches and the
data is written into an auto-scaled Amazon DynamoDB table for downstream
processing

Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS)
standard queue, which is polled by an application running on an Amazon EC2
instance and the data is written into an auto-scaled Amazon DynamoDB table for
downstream processing

Feedback

Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard
queue, which is polled by an AWS Lambda function in batches and the data is written into
an auto-scaled Amazon DynamoDB table for downstream processing

AWS Lambda lets you run code without provisioning or managing servers. You pay only for
the compute time you consume. Amazon Simple Queue Service (SQS) is a fully managed
message queuing service that enables you to decouple and scale microservices,
distributed systems, and serverless applications. SQS offers two types of message
queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-
once delivery. SQS FIFO queues are designed to guarantee that messages are processed
exactly once, in the exact order that they are sent.

AWS manages all ongoing operations and underlying infrastructure needed to provide a
highly available and scalable message queuing service. With SQS, there is no upfront cost,
no need to acquire, install, and configure messaging software, and no time-consuming
build-out and maintenance of supporting infrastructure. SQS queues are dynamically
created and scale automatically so you can build and grow applications quickly and
efficiently.
As there is no need to manually provision the capacity, this is the correct option.
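
A minimal sketch of the Lambda side of this pipeline, assuming an SQS event source mapping is already configured and a DynamoDB table named SensorData (a hypothetical name, with assumed key attributes) exists:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorData")  # hypothetical table name

def lambda_handler(event, context):
    # The SQS event source mapping invokes the function with a batch of
    # messages; each record body is assumed to be a JSON sensor reading.
    for record in event["Records"]:
        reading = json.loads(record["body"])
        table.put_item(Item={
            "vehicle_id": reading["vehicle_id"],   # assumed partition key
            "timestamp": reading["timestamp"],     # assumed sort key
            "speed_kmh": reading.get("speed_kmh"),
        })
    return {"statusCode": 200}
```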
60. A company manages a multi-tier social media application that runs *1/1
on Amazon Elastic Compute Cloud (Amazon EC2) instances behind an
Application Load Balancer. The instances run in an Amazon EC2 Auto
Scaling group across multiple Availability Zones (AZs) and use an
Amazon Aurora database. As an AWS Certified Solutions Architect –
Associate, you have been tasked to make the application more resilient to
periodic spikes in request rates.

Which of the following solutions would you recommend for the given use-case? (Select two)

Use AWS Global Accelerator

Use AWS Direct Connect

Use Amazon Aurora Replica

Use AWS Shield

Use Amazon CloudFront distribution in front of the Application Load Balancer

Feedback

You can use Amazon Aurora replicas and Amazon CloudFront distribution to make the
application more resilient to spikes in request rates.

Use Amazon Aurora Replica

Amazon Aurora Replicas have two main purposes. You can issue queries to them to scale
the read operations for your application. You typically do so by connecting to the reader
endpoint of the cluster. That way, Aurora can spread the load for read-only connections
across as many Aurora Replicas as you have in the cluster. Amazon Aurora Replicas also
help to increase availability. If the writer instance in a cluster becomes unavailable, Aurora
automatically promotes one of the reader instances to take its place as the new writer. Up
to 15 Aurora Replicas can be distributed across the Availability Zones (AZs) that a DB
cluster spans within an AWS Region.

Use Amazon CloudFront distribution in front of the Application Load Balancer

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds, all within a developer-friendly environment. CloudFront points of presence (POPs)
(edge locations) make sure that popular content can be served quickly to your viewers.
Amazon CloudFront also has regional edge caches that bring more of your content closer
to your viewers, even when the content is not popular enough to stay at a POP, to help
improve performance for that content.

Amazon CloudFront offers an origin failover feature to help support your data resiliency
needs. Amazon CloudFront is a global service that delivers your content through a
worldwide network of data centers called edge locations or points of presence (POPs). If
your content is not already cached in an edge location, Amazon CloudFront retrieves it
from an origin that you've identified as the source for the definitive version of the content.
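
As a rough sketch (the cluster and instance identifiers are hypothetical), an Aurora Replica is added by creating a new DB instance inside the existing cluster, and read traffic is then pointed at the cluster's reader endpoint so Aurora can spread the load across all replicas.

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to an existing Aurora cluster (hypothetical IDs);
# the Engine value must match the cluster's engine.
rds.create_db_instance(
    DBInstanceIdentifier="social-app-reader-1",
    DBClusterIdentifier="social-app-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# Look up the cluster's reader endpoint; read-only connections should use
# this endpoint rather than the writer endpoint.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="social-app-cluster"
)["DBClusters"][0]
print(cluster["ReaderEndpoint"])
```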
61. The product team at a startup has figured out a market need to *1/1
support both stateful and stateless client-server communications via the
application programming interface (APIs) developed using its platform.
You have been hired by the startup as a solutions architect to build a
solution to fulfill this market need using Amazon API Gateway.

Which of the following would you identify as correct?

Amazon API Gateway creates RESTful APIs that enable stateful client-server
communication and Amazon API Gateway also creates WebSocket APIs that
adhere to the WebSocket protocol, which enables stateless, full-duplex
communication between client and server

Amazon API Gateway creates RESTful APIs that enable stateless client-server
communication and Amazon API Gateway also creates WebSocket APIs that
adhere to the WebSocket protocol, which enables stateless, full-duplex
communication between client and server

Amazon API Gateway creates RESTful APIs that enable stateful client-server
communication and Amazon API Gateway also creates WebSocket APIs that
adhere to the WebSocket protocol, which enables stateful, full-duplex
communication between client and server

Amazon API Gateway creates RESTful APIs that enable stateless client-server
communication and Amazon API Gateway also creates WebSocket APIs that
adhere to the WebSocket protocol, which enables stateful, full-duplex
communication between client and server

Feedback

Amazon API Gateway creates RESTful APIs that enable stateless client-server
communication and Amazon API Gateway also creates WebSocket APIs that adhere to the
WebSocket protocol, which enables stateful, full-duplex communication between client
and server

Amazon API Gateway is a fully managed service that makes it easy for developers to
create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the front door
for applications to access data, business logic, or functionality from your backend
services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that
enable real-time two-way communication applications.

via - https://aws.amazon.com/api-gateway/
Amazon API Gateway creates RESTful APIs that:

Are HTTP-based.

Enable stateless client-server communication.

Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.

Amazon API Gateway creates WebSocket APIs that:

Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server.

Route incoming messages based on message content.

So Amazon API Gateway supports stateless RESTful APIs as well as stateful WebSocket
APIs. Therefore this option is correct.
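
For illustration (the API name and route selection expression target are assumptions), a WebSocket API is created in API Gateway by specifying the WEBSOCKET protocol type and a route selection expression, which is how API Gateway decides which route handles each incoming message; a REST API, by contrast, is defined purely in terms of stateless HTTP methods and resources.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Hypothetical API name; the route selection expression tells API Gateway
# which field of each WebSocket message to use when choosing a route.
response = apigw.create_api(
    Name="chat-websocket-api",
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)
print(response["ApiEndpoint"])
```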
62. A junior scientist working with the Deep Space Research Laboratory *0/1
at NASA is trying to upload a high-resolution image of a nebula into
Amazon S3. The image size is approximately 3 gigabytes. The junior
scientist is using Amazon S3 Transfer Acceleration (Amazon S3TA) for
faster image upload. It turns out that Amazon S3TA did not result in an
accelerated transfer.

Given this scenario, which of the following is correct regarding the charges for this image
transfer?

The junior scientist needs to pay both S3 transfer charges and S3TA transfer
charges for the image upload

The junior scientist does not need to pay any transfer charges for the image upload

The junior scientist only needs to pay S3TA transfer charges for the image
upload

The junior scientist only needs to pay Amazon S3 transfer charges for the image
upload

Correct answer

The junior scientist does not need to pay any transfer charges for the image upload

Feedback

The junior scientist only needs to pay S3TA transfer charges for the image upload - Since
S3TA did not result in an accelerated transfer, there are no S3TA transfer charges to be
paid.

The junior scientist only needs to pay Amazon S3 transfer charges for the image upload -
There are no S3 data transfer charges when data is transferred in from the internet. So this
option is incorrect.

The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for
the image upload - There are no Amazon S3 data transfer charges when data is transferred
in from the internet. Since S3TA did not result in an accelerated transfer, there are no S3TA
transfer charges to be paid.

References:

https://aws.amazon.com/s3/transfer-acceleration/

https://aws.amazon.com/s3/pricing/
63. A company is in the process of migrating its on-premises SMB file *1/1
shares to AWS so the company can get out of the business of managing
multiple file servers across dozens of offices. The company has 200
terabytes of data in its file servers. The existing on-premises applications
and native Windows workloads should continue to have low latency
access to this data which needs to be stored on a file system service
without any disruptions after the migration. The company also wants any
new applications deployed on AWS to have access to this migrated data.

Which of the following is the best solution to meet this requirement?

Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon S3. The applications deployed on AWS can access this data
directly from Amazon S3

Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed
file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can
access this data directly from Amazon FSx in AWS

Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon EFS. The applications deployed on AWS can
access this data directly from Amazon EFS

Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon FSx for Windows File Server. The applications deployed on
AWS can access this data directly from Amazon FSx in AWS

Feedback

Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon FSx for Windows File Server. The applications deployed on
AWS can access this data directly from Amazon FSx in AWS

For user or team file shares, and file-based application migrations, Amazon FSx File
Gateway provides low-latency, on-premises access to fully managed file shares in Amazon
FSx for Windows File Server. For applications deployed on AWS, you may access your file
shares directly from Amazon FSx in AWS.

For your native Windows workloads and users, or your SMB clients, Amazon FSx for
Windows File Server provides all of the benefits of a native Windows SMB environment
that is fully managed and secured and scaled like any other AWS service. You get detailed
reporting, replication, backup, failover, and support for native Windows tools like DFS and
Active Directory.
64. A retail company has developed a REST API which is deployed in an *1/1
Auto Scaling group behind an Application Load Balancer. The REST API
stores the user data in Amazon DynamoDB and any static content, such
as images, are served via Amazon Simple Storage Service (Amazon S3).
On analyzing the usage trends, it is found that 90% of the read requests
are for commonly accessed data across all users.

As a Solutions Architect, which of the following would you suggest as the MOST efficient
solution to improve the application performance?

Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and Amazon CloudFront
for Amazon S3

Enable ElastiCache Redis for DynamoDB and Amazon CloudFront for Amazon S3

Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for Amazon
S3

Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and ElastiCache
Memcached for Amazon S3

Feedback

Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and Amazon
CloudFront for Amazon S3

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache
for Amazon DynamoDB that delivers up to a 10 times performance improvement—from
milliseconds to microseconds—even at millions of requests per second.

Amazon DynamoDB Accelerator (DAX) is tightly integrated with Amazon DynamoDB—you
simply provision a DAX cluster, use the DAX client SDK to point your existing Amazon
DynamoDB API calls at the DAX cluster, and let DAX handle the rest. Because DAX is API-
compatible with Amazon DynamoDB, you don't have to make any functional application
code changes. DAX is used to natively cache Amazon DynamoDB reads.

Amazon CloudFront is a content delivery network (CDN) service that delivers static and
dynamic web content, video streams, and APIs around the world, securely and at scale. By
design, delivering data out of Amazon CloudFront can be more cost-effective than
delivering it from S3 directly to your users.

When a user requests content that you serve with CloudFront, their request is routed to a
nearby Edge Location. If CloudFront has a cached copy of the requested file, CloudFront
delivers it to the user, providing a fast (low-latency) response. If the file they’ve requested
isn’t yet cached, CloudFront retrieves it from your origin – for example, the Amazon S3
bucket where you’ve stored your content.
So, you can use Amazon CloudFront to improve application performance to serve static
content from Amazon S3.
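
Because DAX is API-compatible with DynamoDB, switching reads to the cache is largely a matter of constructing the client differently. A rough sketch, assuming the amazondax Python package and a hypothetical cluster endpoint and table name:

```python
import boto3
from amazondax import AmazonDaxClient  # assumes the amazondax package is installed

# Plain DynamoDB resource (reads go straight to DynamoDB).
ddb = boto3.resource("dynamodb")

# DAX resource pointed at a hypothetical cluster endpoint; the table-level
# API calls stay the same, so no functional code changes are needed.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)

table = dax.Table("UserProfiles")  # hypothetical table name
item = table.get_item(Key={"user_id": "42"})
print(item.get("Item"))
```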

65. As part of a pilot program, a biotechnology company wants to *1/1
integrate data files from its on-premises analytical application with AWS
Cloud via an NFS interface.

Which of the following AWS service is the MOST efficient solution for the
given use-case?

AWS Storage Gateway - File Gateway

AWS Storage Gateway - Volume Gateway

AWS Site-to-Site VPN

AWS Storage Gateway - Tape Gateway

Feedback

AWS Storage Gateway - File Gateway

AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access
to virtually unlimited cloud storage. The service provides three different types of gateways
– Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-
premises applications to cloud storage, caching data locally for low-latency access.

AWS Storage Gateway's file interface, or file gateway, offers you a seamless way to
connect to the cloud in order to store application data files and backup images as durable
objects on Amazon S3 cloud storage. File gateway offers SMB or NFS-based access to
data in Amazon S3 with local caching. As the company wants to integrate data files from
its analytical instruments into AWS via an NFS interface, therefore AWS Storage Gateway -
File Gateway is the correct answer.

File Gateway Overview:

via - https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html
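
For reference, once a File Gateway has been activated, an NFS file share backed by an S3 bucket can be created programmatically. A rough sketch, where all ARNs are hypothetical placeholders:

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# All ARNs below are hypothetical placeholders; the role must allow the
# gateway to read from and write to the target S3 bucket.
sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::analytics-data-files-example",
)
```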
