scs-c02 2
Get the Full SCS-C02 dumps in VCE and PDF From SurePassExam
https://www.surepassexam.com/SCS-C02-exam-dumps.html (235 New Questions)
Amazon-Web-Services
Exam Questions SCS-C02
AWS Certified Security - Specialty
NEW QUESTION 1
An AWS account that is used for development projects has a VPC that contains two subnets. The first subnet is named public-subnet-1 and has the CIDR block
192.168.1.0/24 assigned. The other subnet is named private-subnet-2 and has the CIDR block 192.168.2.0/24 assigned. Each subnet contains Amazon EC2
instances.
Each subnet is currently using the VPC's default network ACL. The security groups that the EC2 instances in these subnets use have rules that allow traffic
between each instance where required. Currently, all network traffic flow is working as expected between the EC2 instances that are using these subnets.
A security engineer creates a new network ACL that is named subnet-2-NACL with default entries. The security engineer immediately configures private-subnet-2
to use the new network ACL and makes no other changes to the infrastructure. The security engineer starts to receive reports that the EC2 instances in
public-subnet-1 and private-subnet-2 cannot communicate with each other.
Which combination of steps should the security engineer take to allow the EC2 instances that are running in these two subnets to communicate again? (Select
TWO.)
A. Add an outbound allow rule for 192.168.2.0/24 in the VPC's default network ACL.
B. Add an inbound allow rule for 192.168.2.0/24 in the VPC's default network ACL.
C. Add an outbound allow rule for 192.168.2.0/24 in subnet-2-NACL.
D. Add an inbound allow rule for 192.168.1.0/24 in subnet-2-NACL.
E. Add an outbound allow rule for 192.168.1.0/24 in subnet-2-NACL.
Answer: DE
Explanation:
Network ACLs are stateless, and a custom network ACL denies all traffic until explicit allow rules are added. Because subnet-2-NACL was created with default
entries, it blocks all inbound and outbound traffic for private-subnet-2. To restore communication, add an inbound allow rule for 192.168.1.0/24 (the source of
requests from public-subnet-1) and an outbound allow rule for 192.168.1.0/24 (the destination of the response traffic) in subnet-2-NACL. The VPC's default
network ACL, which public-subnet-1 still uses, already allows all traffic.
References: Amazon VPC User Guide
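Network ACL evaluation can be sketched as a stateless, numbered-rule match. The sketch below is a simplified model (not the AWS implementation) showing that a custom NACL with no allow entries blocks traffic until explicit inbound and outbound allow rules are added:

```python
import ipaddress

# Simplified model of network ACL evaluation (not the AWS implementation).
# Rules are (rule_number, action, cidr); the lowest-numbered matching rule
# wins, and the implicit final rule denies everything else.
def evaluate_nacl(rules, address):
    ip = ipaddress.ip_address(address)
    for _, action, cidr in sorted(rules):
        if ip in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit "*" rule

# A custom NACL starts with no allow entries: everything is denied.
inbound_rules = []
outbound_rules = []
print(evaluate_nacl(inbound_rules, "192.168.1.10"))   # deny

# NACLs are stateless, so both the request (inbound) and the response
# (outbound) need explicit allow rules covering the peer subnet's CIDR.
inbound_rules.append((100, "allow", "192.168.1.0/24"))
outbound_rules.append((100, "allow", "192.168.1.0/24"))
print(evaluate_nacl(inbound_rules, "192.168.1.10"))   # allow
print(evaluate_nacl(outbound_rules, "192.168.1.25"))  # allow
```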
NEW QUESTION 2
A company is using Amazon Macie, AWS Firewall Manager, Amazon Inspector, and AWS Shield Advanced in its AWS account. The company wants to receive
alerts if a DDoS attack occurs against the account.
Which solution will meet this requirement?
Answer: D
Explanation:
This answer is correct because AWS Shield Advanced is a service that provides comprehensive protection
against DDoS attacks of any size or duration. It also provides metrics and reports on the DDoS attack vectors, duration, and size. You can create an Amazon
CloudWatch alarm that monitors Shield Advanced metrics such as DDoSAttackBitsPerSecond, DDoSAttackPacketsPerSecond, and
DDoSAttackRequestsPerSecond to receive alerts if a DDoS attack occurs against your account.
For more information, see Monitoring AWS Shield Advanced with Amazon CloudWatch and AWS Shield Advanced metrics and alarms.
NEW QUESTION 3
A company has a relational database workload that runs on Amazon Aurora MySQL. According to new compliance standards, the company must rotate all
database credentials every 30 days. The company needs a solution that maximizes security and minimizes development effort.
Which solution will meet these requirements?
Answer: A
Explanation:
To rotate database credentials every 30 days, the most secure and efficient solution is to store the database credentials in AWS Secrets Manager and configure
automatic credential rotation for every 30 days. Secrets Manager can handle the rotation of the credentials in both the secret and the database, and it can use
AWS KMS to encrypt the credentials. Option B is incorrect because it requires creating a custom Lambda function to rotate the credentials, which is more effort
than using Secrets Manager. Option C is incorrect because it stores the database credentials in an environment file or a configuration file, which is less secure
than using Secrets Manager. Option D is incorrect because it combines the drawbacks of option B and option C. Verified References:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html
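The rotation setup described above can be sketched by constructing the parameters for Secrets Manager's RotateSecret API call. The secret name and rotation Lambda ARN below are illustrative placeholders:

```python
# Builds the parameter dict for Secrets Manager's RotateSecret API call.
# The secret name and rotation Lambda ARN are illustrative placeholders.
def build_rotation_request(secret_id, rotation_lambda_arn, days=30):
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

request = build_rotation_request(
    "prod/aurora-mysql/app-user",
    "arn:aws:lambda:us-east-1:111122223333:function:rotate-aurora-secret",
)
print(request["RotationRules"]["AutomaticallyAfterDays"])  # 30
```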
NEW QUESTION 4
A company stores sensitive documents in Amazon S3 by using server-side encryption with an AWS Key Management Service (AWS KMS) CMK. A new requirement
mandates that the CMK that is used for these documents can be used only for S3 actions.
Which statement should the company add to the key policy to meet this requirement?
A)
B)
A. Option A
B. Option B
Answer: A
NEW QUESTION 5
A company has developed a new Amazon RDS database application. The company must secure the RDS database credentials for encryption in transit and
encryption at rest. The company also must rotate the credentials automatically on a regular basis.
Which solution meets these requirements?
A. Use AWS Systems Manager Parameter Store to store the database credentials. Configure automatic rotation of the credentials.
B. Use AWS Secrets Manager to store the database credentials. Configure automatic rotation of the credentials.
C. Store the database credentials in an Amazon S3 bucket that is configured with server-side encryption with S3 managed encryption keys (SSE-S3). Rotate the
credentials with IAM database authentication.
D. Store the database credentials in Amazon S3 Glacier, and use S3 Glacier Vault Lock. Configure an AWS Lambda function to rotate the credentials on a
scheduled basis.
Answer: B
Explanation:
AWS Secrets Manager encrypts secrets at rest with AWS KMS, delivers them over TLS, and natively supports automatic credential rotation. Systems Manager
Parameter Store does not provide built-in rotation, and the S3-based options are less secure and require custom rotation logic.
NEW QUESTION 6
An AWS account administrator created an IAM group and applied the following managed policy to require that each individual user authenticate using multi-factor
authentication:
After implementing the policy, the administrator receives reports that users are unable to perform Amazon EC2 commands using the AWS CLI.
What should the administrator do to resolve this problem while still enforcing multi-factor authentication?
A. Change the value of aws:MultiFactorAuthPresent to true in the policy.
B. Instruct users to run the aws sts get-session-token CLI command and pass the --serial-number and --token-code parameters. Use the resulting values to make
API/CLI calls.
C. Implement federated API/CLI access using SAML 2.0, then configure the identity provider to enforce multi-factor authentication.
D. Create a role and enforce multi-factor authentication in the role trust policy. Instruct users to run the sts assume-role CLI command and pass the
--serial-number and --token-code parameters. Store the resulting values in environment variables. Add sts:AssumeRole to NotAction in the policy.
Answer: B
Explanation:
The correct answer is B. Instruct users to run the aws sts get-session-token CLI command and pass the multi-factor authentication --serial-number and --token-
code parameters. Use these resulting values to make API/CLI calls.
According to the AWS documentation1, the aws sts get-session-token CLI command returns a set of temporary credentials for an AWS account or IAM user. The
credentials consist of an access key ID, a secret access key, and a security token. These credentials are valid for the specified duration only. The session duration
for IAM users can be between 15 minutes and 36 hours, with a default of 12 hours.
You can use the --serial-number and --token-code parameters to provide the MFA device serial number and the MFA code from the device. The MFA device must
be associated with the user who is making the
get-session-token call. If you do not provide these parameters when your IAM user or role has a policy that requires MFA, you will receive an Access Denied error.
The temporary security credentials that are returned by the get-session-token command can then be used to make subsequent API or CLI calls that require MFA
authentication. You can use environment variables or a profile in your AWS CLI configuration file to specify the temporary credentials.
Therefore, this solution will resolve the problem of users being unable to perform EC2 commands using the AWS CLI, while still enforcing MFA.
The other options are incorrect because:
A. Changing the value of aws:MultiFactorAuthPresent to true will not work, because this is a condition key that is evaluated by AWS when a request is made.
You cannot set this value manually in your policy or request. You must provide valid MFA information to AWS for this condition key to be true.
C. Implementing federated API/CLI access using SAML 2.0 may work, but it requires more operational effort than using the get-session-token command. You
would need to configure a SAML identity provider and trust relationship with AWS, and use a custom SAML client to request temporary credentials from AWS STS.
This solution may also introduce additional security risks if the identity provider is compromised.
D. Creating a role and enforcing MFA in the role trust policy may work, but it also requires more operational effort than using the get-session-token command.
You would need to create a role for each user or group that needs to perform EC2 commands, and specify a trust policy that requires MFA. You would also need
to grant the users permission to assume the role, and instruct them to use the sts assume-role command instead of the get-session-token command.
References:
1: get-session-token — AWS CLI Command Reference
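The workflow described above can be sketched as assembling the CLI call and mapping the returned temporary credentials to the environment variables that the AWS CLI and SDKs read. The MFA serial number and token code are placeholders:

```python
# Assembles the get-session-token CLI invocation described above.
def build_get_session_token_cmd(serial_number, token_code, duration_seconds=43200):
    return [
        "aws", "sts", "get-session-token",
        "--serial-number", serial_number,
        "--token-code", token_code,
        "--duration-seconds", str(duration_seconds),
    ]

# Maps the Credentials block of the response to the environment
# variables that the AWS CLI and SDKs read for subsequent calls.
def credentials_to_env(credentials):
    return {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }

cmd = build_get_session_token_cmd(
    "arn:aws:iam::111122223333:mfa/example-user",  # placeholder MFA serial
    "123456",                                      # placeholder MFA code
)
print(" ".join(cmd))
```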
NEW QUESTION 7
A company is implementing new compliance requirements to meet customer needs. According to the new requirements the company must not use any Amazon
RDS DB instances or DB clusters that lack encryption of the underlying storage. The company needs a solution that will generate an email alert when an
unencrypted DB instance or DB cluster is created. The solution also must terminate the unencrypted DB instance or DB cluster.
Which solution will meet these requirements in the MOST operationally efficient manner?
Answer: A
Explanation:
https://docs.aws.amazon.com/config/latest/developerguide/rds-storage-encrypted.html
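The check performed by the rds-storage-encrypted managed rule amounts to inspecting the StorageEncrypted attribute of each DB instance. A simplified stand-in for that evaluation (instance names are illustrative):

```python
# Simplified stand-in for the rds-storage-encrypted AWS Config rule:
# flag any DB instance description whose StorageEncrypted flag is False.
def find_unencrypted(db_instances):
    return [
        db["DBInstanceIdentifier"]
        for db in db_instances
        if not db.get("StorageEncrypted", False)
    ]

instances = [
    {"DBInstanceIdentifier": "orders-db", "StorageEncrypted": True},
    {"DBInstanceIdentifier": "reports-db", "StorageEncrypted": False},
]
print(find_unencrypted(instances))  # ['reports-db']
```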
NEW QUESTION 8
A company uses an Amazon S3 bucket to store reports. Management has mandated that all new objects stored in this bucket must be encrypted at rest using
server-side encryption with a client-specified AWS Key Management Service (AWS KMS) CMK owned by the same account as the S3 bucket. The AWS account
number is 111122223333, and the bucket name is reportbucket. The company's security specialist must write the S3 bucket policy to ensure the mandate can be
implemented.
Which statement should the security specialist include in the policy?
A.
B.
C.
D.
E. Option A
F. Option B
G. Option C
H. Option D
Answer: D
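A bucket-policy statement that enforces a specific KMS CMK for uploads typically denies s3:PutObject when the s3:x-amz-server-side-encryption-aws-kms-key-id condition key does not match the expected key. A minimal sketch, with a placeholder key ARN:

```python
import json

# Sketch of a bucket-policy statement that denies uploads unless they
# specify the expected KMS key. The key ID and Region are placeholders.
EXPECTED_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

statement = {
    "Sid": "RequireSpecificKmsKey",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::reportbucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": EXPECTED_KEY_ARN
        }
    },
}
print(json.dumps(statement, indent=2))
```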
NEW QUESTION 9
A company is running workloads in a single AWS account on Amazon EC2 instances and Amazon EMR clusters. A recent security audit revealed that multiple
Amazon Elastic Block Store (Amazon EBS) volumes and snapshots are not encrypted.
The company's security engineer is working on a solution that will allow users to deploy EC2 instances and EMR clusters while ensuring that all new EBS volumes
and EBS snapshots are encrypted at rest. The solution must also minimize operational overhead.
Which steps should the security engineer take to meet these requirements?
A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with an EC2 instance as the source and CreateVolume as the event trigger. When the event
is triggered, invoke an AWS Lambda function to evaluate and notify the security engineer if the EBS volume that was created is not encrypted.
B. Use a customer managed IAM policy that will verify that the encryption flag of the CreateVolume context is set to true. Apply this rule to all users.
C. Create an AWS Config rule to evaluate the configuration of each EC2 instance on creation or modification. Have the AWS Config rule trigger an AWS Lambda
function to alert the security team and terminate the instance if the EBS volume is not encrypted.
D. Use the AWS Management Console or AWS CLI to enable encryption by default for EBS volumes in each AWS Region where the company operates.
Answer: D
Explanation:
To ensure that all new EBS volumes and EBS snapshots are encrypted at rest and minimize operational overhead, the security engineer should do the following:
Use the AWS Management Console or AWS CLI to enable encryption by default for EBS volumes in each AWS Region where the company operates. This
allows the security engineer to automatically encrypt any new EBS volumes and snapshots created from those volumes, without requiring any additional actions
from users.
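Enabling default EBS encryption is a one-time, per-Region setting. The per-Region CLI commands can be generated like this (the Region list is an illustrative example):

```python
# Generates the per-Region CLI commands to turn on EBS encryption by
# default. The Region list is an illustrative example.
def ebs_default_encryption_cmds(regions):
    return [
        f"aws ec2 enable-ebs-encryption-by-default --region {region}"
        for region in regions
    ]

for cmd in ebs_default_encryption_cmds(["us-east-1", "eu-west-1"]):
    print(cmd)
```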
NEW QUESTION 10
A development team is attempting to encrypt and decrypt a secure string parameter from the AWS Systems Manager Parameter Store using an AWS Key
Management Service (AWS KMS) CMK. However, each attempt results in an error message being sent to the development team.
Which CMK-related problems possibly account for the error? (Select two.)
Answer: AD
Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html#parameter-store-cmk-fa
NEW QUESTION 10
A company's Chief Security Officer has requested that a Security Analyst review and improve the security posture of each company AWS account. The Security
Analyst decides to do this by improving AWS account root user security.
Which actions should the Security Analyst take to meet these requirements? (Select THREE.)
A. Delete the access keys for the account root user in every account.
B. Create an admin IAM user with administrative privileges and delete the account root user in every account.
C. Implement a strong password to help protect account-level access to the AWS Management Console by the account root user.
D. Enable multi-factor authentication (MFA) on every account root user in all accounts.
E. Create a custom IAM policy to limit permissions to required actions for the account root user and attach the policy to the account root user.
F. Attach an IAM role to the account root user to make use of the automated credential rotation in AWS STS.
Answer: ACD
Explanation:
The AWS account root user has complete access to all AWS resources and services in an account, so root user security best practices focus on protecting it
from unauthorized or accidental use. Deleting the access keys for the account root user in every account (A) prevents programmatic access by the root user,
which reduces the risk of compromise or misuse. Implementing a strong password (C) helps protect console access by the root user. Enabling MFA on every
account root user in all accounts (D) adds an extra layer of security for console access by requiring a verification code in addition to the password. Option B is
invalid because the account root user cannot be deleted. Options E and F are invalid because IAM policies and IAM roles cannot be attached to the account root
user.
NEW QUESTION 14
A security team is developing an application on an Amazon EC2 instance to get objects from an Amazon S3 bucket. All objects in the S3 bucket are encrypted with
an AWS Key Management Service (AWS KMS) customer managed key. All network traffic for requests that are made within the VPC is restricted to the AWS
infrastructure. This traffic does not traverse the public internet.
The security team is unable to get objects from the S3 bucket. Which factors could cause this issue? (Select THREE.)
A. The IAM instance profile that is attached to the EC2 instance does not allow the s3:GetObject action on the S3 bucket in the AWS account.
B. The IAM instance profile that is attached to the EC2 instance does not allow the s3:ListParts action on the S3 bucket in the AWS account.
C. The KMS key policy that encrypts the object in the S3 bucket does not allow the kms:ListKeys action for the EC2 instance profile ARN.
D. The KMS key policy that encrypts the object in the S3 bucket does not allow the kms:Decrypt action for the EC2 instance profile ARN.
E. The security group that is attached to the EC2 instance is missing an outbound rule to the S3 managed prefix list over port 443.
F. The security group that is attached to the EC2 instance is missing an inbound rule from the S3 managed prefix list over port 443.
Answer: ADE
Explanation:
https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html
To get objects from an S3 bucket whose objects are encrypted with a KMS customer managed key, the following must all be in place:
The IAM instance profile that is attached to the EC2 instance must allow the s3:GetObject action on the S3 bucket or object. This permission is required to read
the object from S3, so option A is a possible cause.
The KMS key policy must allow the kms:Decrypt action for the EC2 instance profile ARN. This permission is required to decrypt the object with the KMS key, so
option D is a possible cause.
The security group that is attached to the EC2 instance must have an outbound rule to the S3 managed prefix list over port 443. This rule allows HTTPS traffic
from the EC2 instance to S3 within the AWS infrastructure, so option E is a possible cause.
Option B is incorrect because the s3:ListParts action applies only to multipart uploads, not to getting objects. Option C is incorrect because the kms:ListKeys
action is not required to get objects. Option F is incorrect because security groups are stateful, so no inbound rule is needed for the response traffic.
Verified References:
https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
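The kms:Decrypt grant that option D refers to can be sketched as a key-policy statement scoped to the instance profile's role ARN. The role name below is a placeholder:

```python
import json

# Key-policy statement allowing the EC2 instance's role to decrypt with
# this KMS key. The role name is a placeholder.
def decrypt_statement(role_arn):
    return {
        "Sid": "AllowInstanceRoleDecrypt",
        "Effect": "Allow",
        "Principal": {"AWS": role_arn},
        "Action": "kms:Decrypt",
        "Resource": "*",  # in a key policy, "*" refers to this key itself
    }

stmt = decrypt_statement("arn:aws:iam::111122223333:role/app-instance-role")
print(json.dumps(stmt, indent=2))
```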
NEW QUESTION 19
An Incident Response team is investigating an IAM access key leak that resulted in Amazon EC2 instances being launched. The company did not discover the
incident until many months later. The Director of Information Security wants to implement new controls that will alert when similar incidents happen in the future.
Which controls should the company implement to achieve this? (Select TWO.)
A. Enable VPC Flow Logs in all VPCs. Create a scheduled AWS Lambda function that downloads and parses the logs, and sends an Amazon SNS notification for
violations.
B. Use AWS CloudTrail to make a trail, and apply it to all Regions. Specify an Amazon S3 bucket to receive all the CloudTrail log files.
C. Add the following bucket policy to the company's AWS CloudTrail bucket to prevent log tampering:
{"Version": "2012-10-17", "Statement": {"Effect": "Deny", "Action": "s3:PutObject", "Principal": "*", "Resource": "arn:aws:s3:::cloudtrail/AWSLogs/111122223333/*"}}
Create an Amazon S3 data event for PutObject attempts, which sends notifications to an Amazon SNS topic.
D. Create a Security Auditor role with permissions to access Amazon CloudWatch Logs in all Regions. Ship the logs to an Amazon S3 bucket, and make a lifecycle
policy to ship the logs to Amazon S3 Glacier.
E. Verify that Amazon GuardDuty is enabled in all Regions, and create an Amazon CloudWatch Events rule for Amazon GuardDuty findings. Add an Amazon SNS
topic as the rule's target.
Answer: AE
NEW QUESTION 24
A company is attempting to conduct forensic analysis on an Amazon EC2 instance, but the company is unable to connect to the instance by using AWS Systems
Manager Session Manager. The company has installed AWS Systems Manager Agent (SSM Agent) on the EC2 instance.
The EC2 instance is in a subnet in a VPC that does not have an internet gateway attached. The company has associated a security group with the EC2 instance.
The security group does not have inbound or outbound rules. The subnet's network ACL allows all inbound and outbound traffic.
Which combination of actions will allow the company to conduct forensic analysis on the EC2 instance without compromising forensic data? (Select THREE.)
A. Update the EC2 instance security group to add a rule that allows outbound traffic on port 443 for 0.0.0.0/0.
B. Update the EC2 instance security group to add a rule that allows inbound traffic on port 443 to the VPC's CIDR range.
C. Create an EC2 key pair.
D. Associate the key pair with the EC2 instance.
E. Create a VPC interface endpoint for Systems Manager in the VPC where the EC2 instance is located.
F. Attach a security group to the VPC interface endpoint.
G. Allow inbound traffic on port 443 to the VPC's CIDR range.
H. Create a VPC interface endpoint for the EC2 instance in the VPC where the EC2 instance is located.
Answer: BCF
NEW QUESTION 28
A security engineer recently rotated all IAM access keys in an AWS account. The security engineer then configured AWS Config and enabled the following AWS
Config managed rules: mfa-enabled-for-iam-console-access, iam-user-mfa-enabled, access-key-rotated, and iam-user-unused-credentials-check.
The security engineer notices that all resources are displaying as noncompliant after the IAM GenerateCredentialReport API operation is invoked.
What could be the reason for the noncompliant status?
A. The IAM credential report was generated within the past 4 hours.
B. The security engineer does not have the GenerateCredentialReport permission.
C. The security engineer does not have the GetCredentialReport permission.
D. The AWS Config rules have a MaximumExecutionFrequency value of 24 hours.
Answer: D
Explanation:
The correct answer is D. The AWS Config rules have a MaximumExecutionFrequency value of 24 hours. According to the AWS documentation1, the
MaximumExecutionFrequency parameter specifies the maximum frequency with which AWS Config runs evaluations for a rule. For AWS Config managed rules,
this value can be one of the following:
One_Hour
Three_Hours
Six_Hours
Twelve_Hours
TwentyFour_Hours
If the rule is triggered by configuration changes, it will still run evaluations when AWS Config delivers the configuration snapshot. However, if the rule is triggered
periodically, it will not run evaluations more often than the specified frequency.
In this case, the security engineer enabled four AWS Config managed rules that are triggered periodically. Therefore, these rules will only run evaluations every 24
hours, regardless of when the IAM credential report is generated. This means that the resources will display as noncompliant until the next evaluation cycle, which
could take up to 24 hours after the IAM access keys are rotated.
The other options are incorrect because:
A. The IAM credential report can be generated at any time, but it will not affect the compliance status of the resources until the next evaluation cycle of the
AWS Config rules.
B. The security engineer was able to invoke the IAM GenerateCredentialReport API operation, which means they have the GenerateCredentialReport
permission. This permission is required to generate a credential report that lists all IAM users in an AWS account and their credential status2.
C. The security engineer does not need the GetCredentialReport permission to enable or evaluate AWS Config rules. This permission is required to retrieve a
credential report that was previously generated by using the GenerateCredentialReport operation2.
References:
1: AWS::Config::ConfigRule - AWS CloudFormation 2: IAM: Generate and retrieve IAM credential reports
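The frequency constraint can be sketched as validating a rule definition against the allowed MaximumExecutionFrequency values listed above:

```python
# Allowed values for MaximumExecutionFrequency on periodic Config rules.
ALLOWED_FREQUENCIES = {
    "One_Hour", "Three_Hours", "Six_Hours", "Twelve_Hours", "TwentyFour_Hours",
}

# Rejects a rule definition whose frequency is not one of the allowed values.
def validate_frequency(rule):
    freq = rule.get("MaximumExecutionFrequency")
    if freq not in ALLOWED_FREQUENCIES:
        raise ValueError(f"invalid frequency: {freq}")
    return rule

rule = validate_frequency({
    "ConfigRuleName": "access-key-rotated",
    "MaximumExecutionFrequency": "TwentyFour_Hours",
})
print(rule["MaximumExecutionFrequency"])  # TwentyFour_Hours
```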
NEW QUESTION 31
A company uses AWS Organizations to manage a multi-account AWS environment in a single AWS Region. The organization's management account is named
management-01. The company has turned on AWS Config in all accounts in the organization. The company has designated an account named security-01 as the
delegated administrator for AWS Config.
All accounts report the compliance status of each account's rules to the AWS Config delegated administrator account by using an AWS Config aggregator. Each
account administrator can configure and manage the account's own AWS Config rules to handle each account's unique compliance requirements.
A security engineer needs to implement a solution to automatically deploy a set of 10 AWS Config rules to all existing and future AWS accounts in the
organization. The solution must turn on AWS Config automatically during account creation.
Which combination of steps will meet these requirements? (Select TWO.)
A. Create an AWS CloudFormation template that contains the 10 required AWS Config rules. Deploy the template by using CloudFormation StackSets in the
security-01 account.
B. Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the security-01 account.
C. Create a conformance pack that contains the 10 required AWS Config rules. Deploy the conformance pack from the management-01 account.
D. Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the security-01 account.
E. Create an AWS CloudFormation template that will activate AWS Config. Deploy the template by using CloudFormation StackSets in the management-01
account.
Answer: BE
NEW QUESTION 36
A company uses AWS Organizations to manage a small number of AWS accounts. However, the company plans to add 1,000 more accounts soon. The company
allows only a centralized security team to create IAM roles for all AWS accounts and teams. Application teams submit requests for IAM roles to the security team.
The security team has a backlog of IAM role requests and cannot review and provision the IAM roles quickly.
The security team must create a process that will allow application teams to provision their own IAM roles. The process must also limit the scope of IAM roles and
prevent privilege escalation.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: D
Explanation:
To create a process that will allow application teams to provision their own IAM roles, while limiting the scope of IAM roles and preventing privilege escalation, the
following steps are required:
Create a service control policy (SCP) that defines the maximum permissions that can be granted to any IAM role in the organization. An SCP is a type of policy
that you can use with AWS Organizations to manage permissions for all accounts in your organization. SCPs restrict permissions for entities in member accounts,
including each AWS account root user, IAM users, and roles. For more information, see Service control policies overview.
Create a permissions boundary for IAM roles that matches the SCP. A permissions boundary is an advanced feature for using a managed policy to set the
maximum permissions that an identity-based policy can grant to an IAM entity. A permissions boundary allows an entity to perform only the actions
that are allowed by both its identity-based policies and its permissions boundaries. For more information, see Permissions boundaries for IAM entities.
Add the SCP to the root organizational unit (OU) so that it applies to all accounts in the organization.
This will ensure that no IAM role can exceed the permissions defined by the SCP, regardless of how it is created or modified.
Instruct the application teams to attach the permissions boundary to any IAM role they create. This will prevent them from creating IAM roles that can escalate
their own privileges or access resources they are not authorized to access.
This solution will meet the requirements with the least operational overhead, as it leverages AWS Organizations and IAM features to delegate and limit IAM role
creation without requiring manual reviews or approvals.
The other options are incorrect because they either do not allow application teams to provision their own IAM roles (A), do not limit the scope of IAM roles or
prevent privilege escalation (B), or do not take advantage of managed services whenever possible (C).
Verified References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html
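The privilege-escalation guard described above is commonly expressed as a policy that denies role creation unless the required permissions boundary is attached. A minimal sketch, with a placeholder boundary ARN:

```python
import json

# Sketch of a policy that blocks iam:CreateRole unless the request
# attaches the required permissions boundary. The boundary ARN is a
# placeholder.
def require_boundary_policy(boundary_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyCreateRoleWithoutBoundary",
            "Effect": "Deny",
            "Action": "iam:CreateRole",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"iam:PermissionsBoundary": boundary_arn}
            },
        }],
    }

policy = require_boundary_policy(
    "arn:aws:iam::111122223333:policy/team-permissions-boundary"
)
print(json.dumps(policy["Statement"][0]["Condition"], indent=2))
```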
NEW QUESTION 40
A company used a lift-and-shift approach to migrate from its on-premises data centers to the AWS Cloud. The company migrated on-premises VMs to Amazon
EC2 instances. Now the company wants to replace some of the components that are running on the EC2 instances with managed AWS services that provide
similar functionality.
Initially, the company will transition from load balancer software that runs on EC2 instances to AWS Elastic Load Balancers. A security engineer must ensure that
after this transition, all the load balancer logs are centralized and searchable for auditing. The security engineer must also ensure that metrics are generated to
show which ciphers are in use.
Which solution will meet these requirements?
Answer: C
Explanation:
Amazon S3 is a service that provides scalable, durable, and secure object storage. You can use Amazon S3 to store and retrieve any amount of data from
anywhere on the web1
AWS Elastic Load Balancing is a service that distributes incoming application or network traffic across multiple targets, such as EC2 instances, containers, or
IP addresses. You can use Elastic Load Balancing to increase the availability and fault tolerance of your applications2
Elastic Load Balancing supports access logging, which captures detailed information about requests sent to your load balancer. Each log contains information
such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use access logs to analyze traffic
patterns and troubleshoot issues3
You can configure your load balancer to store access logs in an Amazon S3 bucket that you specify.
You can also specify the interval for publishing the logs, which can be 5 or 60 minutes. The logs are stored in a hierarchical folder structure by load balancer name,
IP address, year, month, day, and time.
Amazon Athena is a service that allows you to analyze data in Amazon S3 using standard SQL. You can use Athena to run ad-hoc queries and get results in
seconds. Athena is serverless, so there is no infrastructure to manage and you pay only for the queries that you run.
You can use Athena to search the access logs that are stored in your S3 bucket. You can create a table in Athena that maps to your S3 bucket and then run
SQL queries on the table. You can also use the Athena console or API to view and download the query results.
You can also use Athena to create queries for the required metrics, such as the number of requests per cipher or protocol. You can then publish the metrics to
Amazon CloudWatch, which is a service that monitors and manages your AWS resources and applications. You can use CloudWatch to collect and track metrics,
create alarms, and automate actions based on the state of your resources.
By using this solution, you can meet the requirements of ensuring that all the load balancer logs are centralized and searchable for auditing and that metrics
are generated to show which ciphers are in use.
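The cipher metric described above can come from a query against the load balancer access-log table in Athena. The database and table names below are assumptions; ssl_cipher and ssl_protocol are fields of the ALB access-log format:

```python
# Athena query over ALB access logs to count requests per cipher.
# Database/table names are assumptions; ssl_cipher and ssl_protocol are
# fields of the ALB access-log format ("-" means no TLS was negotiated).
def cipher_usage_query(database="alb_logs_db", table="alb_access_logs"):
    return (
        f"SELECT ssl_cipher, ssl_protocol, COUNT(*) AS request_count "
        f"FROM {database}.{table} "
        f"WHERE ssl_cipher <> '-' "
        f"GROUP BY ssl_cipher, ssl_protocol "
        f"ORDER BY request_count DESC"
    )

print(cipher_usage_query())
```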
NEW QUESTION 43
A security engineer needs to implement a write-once-read-many (WORM) model for data that a company will store in Amazon S3 buckets. The company uses the
S3 Standard storage class for all of its S3 buckets. The security engineer must ensure that objects cannot be overwritten or deleted by any user, including the
AWS account root user.
Which solution will meet these requirements?
Answer: A
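The WORM requirement, including protection from the account root user, maps to S3 Object Lock in compliance mode. A minimal sketch of the Object Lock configuration (the retention period is an illustrative assumption):

```python
# Sketch of an S3 Object Lock configuration for a WORM bucket.
# Compliance mode prevents overwrites and deletes by any user, including
# the account root user, for the retention period (illustrative: 7 years).
def object_lock_configuration(years=7):
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {"Mode": "COMPLIANCE", "Years": years}
        },
    }

config = object_lock_configuration()
print(config["Rule"]["DefaultRetention"]["Mode"])  # COMPLIANCE
```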
NEW QUESTION 47
Your company is planning on using bastion hosts for administering the servers in AWS. Which of the following is the best description of a bastion host from a
security perspective?
Please select:
A. A bastion host should be on a private subnet and never a public subnet due to security concerns
B. A bastion host sits on the outside of an internal network and is used as a gateway into the private network and is considered the critical strong point of the
network
C. Bastion hosts allow users to log in using RDP or SSH and use that session to SSH into the internal network to access private subnet resources.
D. A bastion host should maintain extremely tight security and monitoring as it is available to the public
Answer: C
Explanation:
A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single
application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer.
In AWS, a bastion host is kept on a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private
subnets.
Options A and B are invalid because the bastion host needs to sit on the public network. Option D is invalid because bastion hosts are not used for monitoring. For
more information on bastion hosts, browse to the below URL:
https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html
The correct answer is: Bastion hosts allow users to log in using RDP or SSH and use that session to SSH into internal network to access private subnet resources.
NEW QUESTION 52
To meet regulatory requirements, a Security Engineer needs to implement an IAM policy that restricts the use of AWS services to the us-east-1 Region.
What policy should the Engineer implement?
A.
Answer: C
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.h
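The answer options are missing from this dump, but the linked AWS documentation shows the pattern: a Deny statement conditioned on aws:RequestedRegion, with global services exempted via NotAction. A hedged sketch (the exempted service list is illustrative; the exact services needed depend on the account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllOutsideUsEast1",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "sts:*",
        "cloudfront:*",
        "support:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": "us-east-1"
        }
      }
    }
  ]
}
```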
NEW QUESTION 54
A security engineer is troubleshooting an AWS Lambda function that is named MyLambdaFunction. The function is encountering an error when the function
attempts to read the objects in an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. The S3 bucket has the following bucket policy:
Which change should the security engineer make to the policy to ensure that the Lambda function can read the bucket objects?
Answer: C
Explanation:
The correct answer is C. Change the Resource element to “arn:aws:s3:::DOC-EXAMPLE-BUCKET/*”.
The reason is that the Resource element in the bucket policy specifies which objects in the bucket are affected by the policy. In this case, the policy only applies to
the bucket itself, not the objects inside it. Therefore, the Lambda function cannot access the objects with the s3:GetObject permission. To fix this, the Resource
element should include a wildcard (*) to match all objects in the bucket. This way, the policy grants the Lambda function permission to read any object in the
bucket.
The other options are incorrect for the following reasons:
A. Removing the Condition element would not help, because it only restricts access based on the source IP address of the request. The Principal element
should not be changed to the Lambda function ARN, because it specifies who is allowed or denied access by the policy. The policy should allow access to any
principal ("*") and rely on IAM roles or policies to control access to the Lambda function.
B. Changing the Action element to include s3:GetBucket* would not help, because it would grant additional permissions that are not needed by the Lambda
function, such as s3:GetBucketAcl or s3:GetBucketPolicy. The s3:GetObject* permission is sufficient for reading objects in the bucket.
D. Changing the Resource element to the Lambda function ARN would not make sense, because it would mean that the policy applies to the Lambda function
itself, not the bucket or its objects. The Principal element should not be changed to s3.amazonaws.com, because it would grant access to any AWS service that
uses S3, not just Lambda.
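The distinction between the bucket ARN and object ARNs can be illustrated with a small Python sketch that approximates IAM's wildcard matching with fnmatch (fnmatch is only an approximation, not IAM's actual matcher; the bucket and key names follow the question):

```python
from fnmatch import fnmatch

def resource_matches(policy_resource: str, request_arn: str) -> bool:
    # IAM treats '*' in a policy Resource as a wildcard; fnmatch's globbing
    # behaves similarly for this simple case.
    return fnmatch(request_arn, policy_resource)

bucket_arn = "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
object_arn = "arn:aws:s3:::DOC-EXAMPLE-BUCKET/reports/data.csv"

# The bucket ARN alone never matches object ARNs, so s3:GetObject is denied
print(resource_matches(bucket_arn, object_arn))         # False
# Appending /* makes the policy cover every object in the bucket
print(resource_matches(bucket_arn + "/*", object_arn))  # True
```

This is why the Resource element must be "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" for object-level actions such as s3:GetObject.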
NEW QUESTION 56
A developer is building a serverless application hosted on AWS that uses Amazon Redshift as a data store. The application has separate modules for read/write
and read-only functionality. The modules need their own database users for compliance reasons.
Which combination of steps should a security engineer implement to grant appropriate access? (Select TWO.)
A. Configure cluster security groups for each application module to control access to database users that are required for read-only and read/write
B. Configure a VPC endpoint for Amazon Redshift. Configure an endpoint policy that maps database users to each application module, and allow access to the
tables that are required for read-only and read/write
C. Configure an IAM policy for each module. Specify the ARN of an Amazon Redshift database user that allows the GetClusterCredentials API call
D. Create local database users for each module
E. Configure an IAM policy for each module. Specify the ARN of an IAM user that allows the GetClusterCredentials API call
Answer: CD
Explanation:
To give each module its own database identity, create local database users in Amazon Redshift for each module (option D), and configure an IAM policy for each
module that allows the redshift:GetClusterCredentials API call and specifies the ARN of the corresponding Amazon Redshift database user (option C).
GetClusterCredentials returns temporary, scoped credentials for an existing database user, so each module can authenticate without storing long-lived passwords.
Option E is incorrect because the policy must reference a Redshift database user (dbuser) ARN, not an IAM user ARN.
References: Amazon Redshift Database Developer Guide, GetClusterCredentials API reference
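A hedged sketch of such an IAM policy for a hypothetical read-only module (the Region, account ID, cluster name, database name, and readonly_user database user are illustrative assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "redshift:GetClusterCredentials",
      "Resource": [
        "arn:aws:redshift:us-east-1:111122223333:dbuser:example-cluster/readonly_user",
        "arn:aws:redshift:us-east-1:111122223333:dbname:example-cluster/dev"
      ]
    }
  ]
}
```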
NEW QUESTION 58
A company has a single AWS account and uses an Amazon EC2 instance to test application code. The company recently discovered that the instance was
compromised. The instance was serving up malware. The analysis of the instance showed that the instance was compromised 35 days ago.
A security engineer must implement a continuous monitoring solution that automatically notifies the company’s security team about compromised instances
through an email distribution list for high severity findings. The security engineer must implement the solution as soon as possible.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)
Answer: BCE
NEW QUESTION 63
A company has multiple departments. Each department has its own AWS account. All these accounts belong to the same organization in AWS Organizations.
A large .csv file is stored in an Amazon S3 bucket in the sales department's AWS account. The company wants to allow users from the other accounts to access the
.csv file's content through the combination of AWS Glue and Amazon Athena. However, the company does not want to allow users from the other accounts to
access other files in the same folder.
Which solution will meet these requirements?
A. Apply a user policy in the other accounts to allow AWS Glue and Athena to access the .csv file.
B. Use S3 Select to restrict access to the .csv file.
C. In the AWS Glue Data Catalog, use S3 Select as the source of the AWS Glue database.
D. Define an AWS Glue Data Catalog resource policy in AWS Glue to grant cross-account S3 object access to the .csv file.
E. Grant AWS Glue access to Amazon S3 in a resource-based policy that specifies the organization as the principal.
Answer: A
NEW QUESTION 66
Your CTO thinks your AWS account was hacked. What is the only way to know for certain if there was unauthorized access and what they did, assuming your
hackers are very sophisticated AWS engineers and doing everything they can to cover their tracks?
Please select:
Answer: A
Explanation:
The AWS documentation mentions the following:
To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is
built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete,
or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them.
Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has
not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has
been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time.
Options B, C, and D are invalid because you need to use CloudTrail log file integrity validation. For more information on CloudTrail log file validation,
please visit the below URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
The correct answer is: Use CloudTrail Log File Integrity Validation.
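As a sketch, validation from the AWS CLI looks like this (the trail ARN and start time are illustrative assumptions):

```shell
aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:111122223333:trail/example-trail \
    --start-time 2024-01-01T00:00:00Z
```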
NEW QUESTION 71
A startup company is using a single AWS account that has resources in a single AWS Region. A security engineer configures an AWS CloudTrail trail in the same
Region to deliver log files to an Amazon S3 bucket by using the AWS CLI.
Because of expansion, the company adds resources in multiple Regions. The security engineer notices that the logs from the new Regions are not reaching the
S3 bucket.
What should the security engineer do to fix this issue with the LEAST amount of operational overhead?
Answer: D
Explanation:
The correct answer is D. Change the existing CloudTrail trail so that it applies to all Regions.
According to the AWS documentation1, you can configure CloudTrail to deliver log files from multiple Regions to a single S3 bucket for a single account. To
change an existing single-Region trail to log in all Regions, you must use the AWS CLI and add the --is-multi-region-trail option to the update-trail command2. This
will ensure that you log global service events and capture all management event activity in your account.
Option A is incorrect because creating a new CloudTrail trail for each Region will incur additional costs and increase operational overhead. Option B is incorrect
because changing the S3 bucket to receive notifications will not affect the delivery of log files from other Regions. Option C is incorrect because creating a new
CloudTrail trail that applies to all Regions will result in duplicate log files for the original Region and also incur additional costs.
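As a sketch, the CLI change looks like this (the trail name is an illustrative assumption):

```shell
aws cloudtrail update-trail \
    --name example-trail \
    --is-multi-region-trail
```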
NEW QUESTION 75
A security engineer receives a notice from the AWS Abuse team about suspicious activity from a Linux-based Amazon EC2 instance that uses Amazon Elastic
Block Store (Amazon EBS)-based storage. The instance is making connections to known malicious addresses.
The instance is in a development account within a VPC that is in the us-east-1 Region. The VPC contains an internet gateway and has a subnet in us-east-1a and
us-east-1b. Each subnet is associated with a route table that uses the internet gateway as a default route. Each subnet also uses the default network ACL. The
suspicious EC2 instance runs within the us-east-1b subnet. During an initial investigation, a security engineer discovers that the suspicious instance is the only
instance that runs in the subnet.
Which response will immediately mitigate the attack and help investigate the root cause?
A. Log in to the suspicious instance and use the netstat command to identify remote connections. Use the IP addresses from these remote connections to create
deny rules in the security group of the instance. Install diagnostic tools on the instance for investigation. Update the outbound network ACL for the subnet in
us-east-1b to explicitly deny all connections as the first rule during the investigation of the instance
B. Update the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule. Replace the security group with a new security
group that allows connections only from a diagnostics security group. Update the outbound network ACL for the us-east-1b subnet to remove the deny all rule.
Launch a new EC2 instance that has diagnostic tools. Assign the new security group to the new EC2 instance. Use the new EC2 instance to investigate the
suspicious instance
C. Ensure that the Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the suspicious EC2 instance will not delete upon termination.
Terminate the instance. Launch a new EC2 instance in us-east-1a that has diagnostic tools. Mount the EBS volumes from the terminated instance for investigation
D. Create an AWS WAF web ACL that denies traffic to and from the suspicious instance. Attach the AWS WAF web ACL to the instance to mitigate the attack. Log
in to the instance and install diagnostic tools to investigate the instance
Answer: B
Explanation:
This option suggests updating the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule, replacing the security group
with a new one that only allows connections from a diagnostics security group, and launching a new EC2 instance with diagnostic tools to investigate the
suspicious instance. This option will immediately mitigate the attack and provide the necessary tools for investigation.
NEW QUESTION 76
A Systems Engineer is troubleshooting the connectivity of a test environment that includes a virtual security appliance deployed inline. In addition to using the
virtual security appliance, the Development team wants to use security groups and network ACLs to accomplish various security requirements in the environment.
What configuration is necessary to allow the virtual security appliance to route the traffic?
Answer: C
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#eni-basics Source/destination checking "You must disable source/destination checks if
the instance runs services such as network address translation, routing, or firewalls."
The correct answer is C. Disable the Network Source/Destination check on the security appliance’s elastic network interface.
This answer is correct because disabling the Network Source/Destination check allows the virtual security appliance to route traffic that is not addressed to or from
itself. By default, this check is enabled on all EC2 instances, and it prevents them from forwarding traffic that does not match their own IP or MAC addresses.
However, for a virtual security appliance that acts as a router or a firewall, this check needs to be disabled, otherwise it will drop the traffic that it is supposed to
route12.
The other options are incorrect because:
A. Disabling network ACLs is not a solution, because network ACLs are optional layers of security for the subnets in a VPC. They can be used to allow or deny
traffic based on IP addresses and ports, but they do not affect the routing behavior of the virtual security appliance3.
B. Configuring the security appliance’s elastic network interface for promiscuous mode is not a solution, because promiscuous mode is a mode for a network
interface that causes it to pass all traffic it receives to the CPU, rather than passing only the frames that it is programmed to receive. Promiscuous mode is
normally used for packet sniffing or monitoring, but it does not enable the network interface to route traffic4.
D. Placing the security appliance in the public subnet with the internet gateway is not a solution, because it does not address the routing issue of the virtual
security appliance. The security appliance can be placed in either a public or a private subnet, depending on the network design and security requirements, but it
still needs to have the Network Source/Destination check disabled to route traffic properly5.
References:
1: Enabling or disabling source/destination checks - Amazon Elastic Compute Cloud 2: Virtual security appliance - Wikipedia 3: Network ACLs - Amazon Virtual
Private Cloud 4: Promiscuous mode - Wikipedia 5: NAT instances - Amazon Virtual Private Cloud
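As a sketch, disabling the check from the CLI looks like this (the instance ID is an illustrative assumption):

```shell
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check
```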
NEW QUESTION 77
A company's security engineer is developing an incident response plan to detect suspicious activity in an AWS account for VPC hosted resources. The security
engineer needs to provide visibility for as many AWS Regions as possible.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
Answer: BD
Explanation:
To detect suspicious activity in an AWS account for VPC hosted resources, the security engineer needs to use a service that can monitor network traffic and API
calls across all AWS Regions. Amazon GuardDuty is a threat detection service that can do this by analyzing VPC Flow Logs, AWS CloudTrail event logs, and DNS
logs. By activating GuardDuty across all AWS Regions, the security engineer can provide visibility for as many regions as possible. GuardDuty generates findings
that contain details about the potential threats detected in the account. To respond to these findings, the security engineer needs to create a mechanism that can
notify the relevant stakeholders or take remedial actions. One way to do this is to use Amazon EventBridge, which is a serverless event bus service that can
connect AWS services and third-party applications. By creating an EventBridge rule that responds to GuardDuty findings and publishes them to an Amazon Simple
Notification Service (Amazon SNS) topic, the security engineer can enable subscribers of the topic to receive notifications via email, SMS, or other methods. This
is a cost-effective solution that does not require any additional infrastructure or code.
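A hedged sketch of the EventBridge event pattern for high-severity GuardDuty findings (GuardDuty encodes high severity as numeric values of 7.0 and above; the exact threshold is an assumption about the team's definition of "high"):

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [">=", 7] }]
  }
}
```

A rule with this pattern can target the SNS topic so that subscribers on the email distribution list are notified.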
NEW QUESTION 80
A company deploys a distributed web application on a fleet of Amazon EC2 instances. The fleet is behind an Application Load Balancer (ALB) that will be
configured to terminate the TLS connection. All TLS traffic to the ALB must stay secure, even if the certificate private key is compromised.
How can a security engineer meet this requirement?
A. Create an HTTPS listener that uses a certificate that is managed by AWS Certificate Manager (ACM).
B. Create an HTTPS listener that uses a security policy that uses a cipher suite with perfect forward secrecy (PFS).
C. Create an HTTPS listener that uses the Server Order Preference security feature.
D. Create a TCP listener that uses a custom security policy that allows only cipher suites with perfect forward secrecy (PFS).
Answer: B
Explanation:
Perfect forward secrecy (PFS) uses ephemeral session keys, so previously captured TLS traffic cannot be decrypted even if the certificate's private key is later
compromised. An ACM-managed certificate (option A) does not protect past sessions if the private key leaks, and an ALB does not support TCP listeners with
custom TLS security policies (option D).
NEW QUESTION 85
A security engineer needs to set up an Amazon CloudFront distribution for an Amazon S3 bucket that hosts a static website. The security engineer must allow
only specified IP addresses to access the website. The security engineer also must prevent users from accessing the website directly by using S3 URLs.
Which solution will meet these requirements?
Answer: B
NEW QUESTION 90
The Security Engineer is managing a traditional three-tier web application that is running on Amazon EC2 instances. The application has become the target of
increasing numbers of malicious attacks from the Internet.
What steps should the Security Engineer take to check for known vulnerabilities and limit the attack surface? (Choose two.)
A. Use AWS Certificate Manager to encrypt all traffic between the client and application servers.
B. Review the application security groups to ensure that only the necessary ports are open.
C. Use Elastic Load Balancing to offload Secure Sockets Layer encryption.
D. Use Amazon Inspector to periodically scan the backend instances.
E. Use AWS Key Management Service (AWS KMS) to encrypt all the traffic between the client and application servers.
Answer: BD
Explanation:
The steps that the Security Engineer should take to check for known vulnerabilities and limit the attack surface are:
B. Review the application security groups to ensure that only the necessary ports are open. This is a good practice to reduce the exposure of the EC2
instances to potential attacks from the Internet. Security groups act as virtual firewalls for the EC2 instances, controlling the inbound and outbound traffic
that is allowed to reach them.
D. Use Amazon Inspector to periodically scan the backend instances. This is a service that helps you to identify vulnerabilities and exposures in your EC2
instances and applications. Amazon Inspector can perform automated security assessments based on predefined or custom rules packages2.
NEW QUESTION 91
A company needs to follow security best practices to deploy resources from an AWS CloudFormation template. The CloudFormation template must be able to
configure sensitive database credentials.
The company already uses AWS Key Management Service (AWS KMS) and AWS Secrets Manager. Which solution will meet the requirements?
A. Use a dynamic reference in the CloudFormation template to reference the database credentials in Secrets Manager.
B. Use a parameter in the CloudFormation template to reference the database credentials.
C. Encrypt the CloudFormation template by using AWS KMS.
D. Use a SecureString parameter in the CloudFormation template to reference the database credentials in Secrets Manager.
E. Use a SecureString parameter in the CloudFormation template to reference an encrypted value in AWS KMS
Answer: A
Explanation:
Option A: This option meets the requirements of following security best practices and configuring sensitive database credentials in the CloudFormation
template. A dynamic reference is a way to specify external values that are stored and managed in other services, such as Secrets Manager, in the stack
templates1. When using a dynamic reference, CloudFormation retrieves the value of the specified reference when necessary during stack and change set
operations1. Dynamic references can be used for certain resources that support them, such as AWS::RDS::DBInstance1. By using a dynamic reference to
reference the database credentials in Secrets Manager, the company can leverage the existing integration between these services and avoid hardcoding the
secret information in the template. Secrets Manager is a service that helps you protect secrets needed to access your applications, services, and IT
resources2. Secrets Manager enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle2.
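A hedged sketch of a dynamic reference in a CloudFormation resource (the secret name and the other resource properties are illustrative assumptions):

```yaml
MyDatabase:
  Type: AWS::RDS::DBInstance
  Properties:
    DBInstanceClass: db.t3.micro
    Engine: mysql
    AllocatedStorage: "20"
    # CloudFormation resolves these values from Secrets Manager at deploy time,
    # so the credentials never appear in the template itself.
    MasterUsername: "{{resolve:secretsmanager:MyDatabaseSecret:SecretString:username}}"
    MasterUserPassword: "{{resolve:secretsmanager:MyDatabaseSecret:SecretString:password}}"
```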
NEW QUESTION 95
A company's Security Team received an email notification from the Amazon EC2 Abuse team that one or more of the company's Amazon EC2 instances may have
been compromised.
Which combination of actions should the Security team take to respond to the current incident? (Select TWO.)
A. Open a support case with the AWS Security team and ask them to remove the malicious code from the affected instance
B. Respond to the notification and list the actions that have been taken to address the incident
C. Delete all IAM users and resources in the account
D. Detach the internet gateway from the VPC, remove all rules that contain 0.0.0.0/0 from the security groups, and create a NACL rule to deny all traffic inbound
from the internet
E. Delete the identified compromised instances and delete any associated resources that the Security team did not create.
Answer: DE
Explanation:
These are the recommended actions to take when you receive an abuse notice from AWS. You should review the abuse notice to see what content or activity was
reported and detach the internet gateway from the VPC to isolate the affected instances from the internet. You should also remove any rules that allow inbound
traffic from 0.0.0.0/0 from the security groups and create a network access control list (NACL) rule to deny all traffic inbound from the internet. You should then
delete the compromised instances and any associated resources
that you did not create. The other options are either inappropriate or unnecessary for responding to the abuse notice.
NEW QUESTION 98
There is a requirement for a company to transfer large amounts of data between AWS and an on-premises location. There is an additional requirement for
low-latency and highly consistent traffic to AWS. Given these requirements, how would you design a hybrid architecture? Choose the correct answer from the
options below.
Please select:
A. Provision a Direct Connect connection to an AWS Region using a Direct Connect partner.
B. Create a VPN tunnel for private connectivity, which increases network consistency and reduces latency.
C. Create an IPsec tunnel for private connectivity, which increases network consistency and reduces latency.
D. Create a VPC peering connection between AWS and the customer gateway.
Answer: A
Explanation:
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private
connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth
throughput, and provide a more consistent network experience than Internet-based connections.
Options B and C are invalid because these options will not reduce network latency. Option D is invalid because VPC peering is only used to connect two VPCs.
For more information on AWS Direct Connect, just browse to the below URL: https://aws.amazon.com/directconnect
The correct answer is: Provision a Direct Connect connection to an AWS Region using a Direct Connect partner.
A. Configure the DB instance to allow public access. Update the DB instance security group to allow access from the Lambda public address space for the AWS
Region
B. Deploy the Lambda functions inside the VPC. Attach a network ACL to the Lambda subnet. Provide outbound rule access to the VPC CIDR range only. Update
the DB instance security group to allow traffic from 0.0.0.0/0
C. Deploy the Lambda functions inside the VPC. Attach a security group to the Lambda functions. Provide outbound rule access to the VPC CIDR range only.
Update the DB instance security group to allow traffic from the Lambda security group
D. Peer the Lambda default VPC with the VPC that hosts the DB instance to allow direct network access without the need for security groups
Answer: C
Explanation:
The AWS documentation states that you can deploy the Lambda functions inside the VPC and attach a security group to the Lambda functions. You can then
provide outbound rule access to the VPC CIDR range only and update the DB instance security group to allow traffic from the Lambda security group. This method
is the most secure way to meet the requirements.
References: AWS Lambda Developer Guide
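A hedged CloudFormation-style sketch of the two security groups (the VPC ID, VPC CIDR range, and database port are illustrative assumptions):

```yaml
LambdaSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security group for the Lambda functions in the VPC
    VpcId: vpc-0123456789abcdef0
    SecurityGroupEgress:
      - IpProtocol: "-1"
        CidrIp: 10.0.0.0/16   # outbound access to the VPC CIDR range only
DatabaseSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security group for the DB instance
    VpcId: vpc-0123456789abcdef0
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 3306
        ToPort: 3306
        SourceSecurityGroupId: !Ref LambdaSecurityGroup  # traffic from Lambda SG only
```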
A. The IAM role does not have permission to use the KMS CreateKey operation.
B. The S3 bucket lacks a policy that allows access to the customer managed key that encrypts the objects.
C. The IAM role does not have permission to use the customer managed key that encrypts the objects that are in the S3 bucket.
D. The ACL of the S3 objects does not allow read access for the objects when the objects are encrypted at rest.
Answer: C
Explanation:
When using server-side encryption with AWS KMS keys (SSE-KMS), the requester must have both Amazon S3 permissions and AWS KMS permissions to access
the objects. The Amazon S3 permissions are for the bucket and object operations, such as s3:ListBucket and s3:GetObject. The AWS KMS permissions are for
the key operations, such as kms:GenerateDataKey and kms:Decrypt. In this case, the IAM role has the necessary Amazon S3 permissions, but not the AWS KMS
permissions to use the customer managed key that encrypts the objects. Therefore, the IAM role receives an access denied message when trying to access the
objects. Verified References:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/troubleshoot-403-errors.html
https://repost.aws/knowledge-center/s3-access-denied-error-kms
https://repost.aws/knowledge-center/cross-account-access-denied-error-s3
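A hedged sketch of the additional IAM permissions the role needs on the customer managed key (the key ARN is an illustrative assumption; kms:Decrypt alone suffices for reads, and kms:GenerateDataKey is also needed for writes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```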
A. Verify that the KMS key policy specifies a deny statement that prevents access to the key by using the aws:SourceIp condition key. Check that the range
includes the EC2 instance IP address that is associated with the EBS volume
B. Verify that the KMS key that is associated with the EBS volume is set to the Symmetric key type
C. Verify that the KMS key that is associated with the EBS volume is in the Enabled state
D. Verify that the EC2 role that is associated with the instance profile has the correct IAM instance policy to launch an EC2 instance with the EBS volume
E. Verify that the key that is associated with the EBS volume has not expired and needs to be rotated
Answer: CD
Explanation:
To troubleshoot the issue of an EC2 instance failing to start and transitioning to a terminated state when it has an EBS volume encrypted with an AWS KMS
customer managed key, a security engineer should take the following steps:
* C. Verify that the KMS key that is associated with the EBS volume is in the Enabled state. If the key is not enabled, it will not function properly and could cause
the EC2 instance to fail.
* D. Verify that the EC2 role that is associated with the instance profile has the correct IAM instance policy to launch an EC2 instance with the EBS volume. If the
instance does not have the necessary permissions, it may not be able to mount the volume and could cause the instance to fail.
Therefore, options C and D are the correct answers.
A. Analyze an AWS Identity and Access Management (IAM) use report from AWS Trusted Advisor to see when the access key was last used.
B. Analyze Amazon CloudWatch Logs for activity by searching for the access key.
C. Analyze VPC flow logs for activity by searching for the access key.
D. Analyze a credential report in AWS Identity and Access Management (IAM) to see when the access key was last used.
Answer: A
Explanation:
To assess the impact of the exposed access key, the security engineer should recommend the following solution:
Analyze an IAM use report from AWS Trusted Advisor to see when the access key was last used. This allows the security engineer to use a tool that provides
information about IAM entities and credentials in their account, and check if there was any unauthorized activity with the exposed access key.
Answer: D
Explanation:
The possible reason that the IAM user cannot access the objects in the S3 bucket is D. The KMS key policy has been edited to remove the ability for the AWS
account to have full access to the key.
This answer is correct because the KMS key policy is the primary way to control access to the KMS key, and it must explicitly allow the AWS account to have full
access to the key. If the KMS key policy has been edited to remove this permission, then the IAM policy that grants kms:Decrypt permission to the IAM user has no
effect, and the IAM user cannot decrypt the objects in the S3 bucket12.
The other options are incorrect because:
A. The IAM policy does not need to allow the kms:DescribeKey permission, because this permission is not required for decrypting objects in S3 using SSE-
KMS. The kms:DescribeKey permission allows getting information about a KMS key, such as its creation date, description, and key state3.
B. The S3 bucket has not been changed to use the AWS managed key to encrypt objects at rest, because this would not cause an Access Denied message for
the IAM user. The AWS managed key is a default KMS key that is created and managed by AWS for each AWS account and Region. The IAM user does not need
any permissions on this key to use it for SSE-KMS4.
C. An S3 bucket policy does not need to be added to allow the IAM user to access the objects, because the IAM user already has s3:List* and s3:Get*
permissions for the S3 bucket and its objects through an IAM policy. An S3 bucket policy is an optional way to grant cross-account access or public access to an
S3 bucket5.
References:
1: Key policies in AWS KMS 2: Using server-side encryption with AWS KMS keys (SSE-KMS) 3: AWS KMS API Permissions Reference 4: Using server-side
encryption with Amazon S3 managed keys (SSE-S3) 5: Bucket policy examples
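The default key policy statement that grants the account full access looks like this sketch (the account ID is an illustrative assumption); if this statement has been removed, IAM policies alone cannot grant access to the key:

```json
{
  "Sid": "Enable IAM User Permissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:root"
  },
  "Action": "kms:*",
  "Resource": "*"
}
```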
A. Configure trusted access for AWS Systems Manager in Organizations. Configure a bastion host from the management account. Replace SSH and RDP by using
Systems Manager Session Manager from the management account. Configure Session Manager logging to Amazon CloudWatch Logs
B. Replace SSH and RDP with AWS Systems Manager Session Manager. Install Systems Manager Agent (SSM Agent) on the instances. Attach the
C. AmazonSSMManagedInstanceCore role to the instances. Configure session data streaming to Amazon CloudWatch Logs. Create a separate logging account
that has appropriate cross-account permissions to audit the log data
D. Install a bastion host in the management account. Reconfigure all SSH and RDP to allow access only from the bastion host. Install AWS Systems Manager
Agent (SSM Agent) on the bastion host. Attach the AmazonSSMManagedInstanceCore role to the bastion host. Configure session data streaming to Amazon
CloudWatch Logs in a separate logging account to audit log data
E. Replace SSH and RDP with AWS Systems Manager State Manager. Install Systems Manager Agent (SSM Agent) on the instances. Attach the
AmazonSSMManagedInstanceCore role to the instances. Configure session data streaming to Amazon CloudTrail. Use CloudTrail Insights to analyze the trail
data
Answer: C
Explanation:
To meet the requirements of securing access management and implementing a centralized logging solution, the most secure solution would be to:
Install a bastion host in the management account.
Reconfigure all SSH and RDP to allow access only from the bastion host.
Install AWS Systems Manager Agent (SSM Agent) on the bastion host.
Attach the AmazonSSMManagedInstanceCore role to the bastion host.
Configure session data streaming to Amazon CloudWatch Logs in a separate logging account to audit log data
This solution provides the following security benefits:
It uses AWS Systems Manager Session Manager instead of traditional SSH and RDP protocols, which provides a secure method for accessing EC2 instances
without requiring inbound firewall rules or open ports.
It provides audit trails by configuring Session Manager logging to Amazon CloudWatch Logs and creating a separate logging account to audit the log data.
It uses the AWS Systems Manager Agent to automate common administrative tasks and improve the security posture of the instances.
The separate logging account with cross-account permissions provides better data separation and improves security posture.
https://aws.amazon.com/solutions/implementations/centralized-logging/
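As a rough sketch (names are illustrative, not from the question), the Session Manager preferences document that streams session data to CloudWatch Logs takes a shape like this:

```python
import json

# Sketch of a Session Manager preferences document; the log group name
# is a placeholder for the group in the separate logging account.
session_prefs = {
    "schemaVersion": "1.0",
    "description": "Session Manager preferences with CloudWatch logging",
    "sessionType": "Standard_Stream",
    "inputs": {
        "cloudWatchLogGroupName": "/ssm/session-logs",
        "cloudWatchEncryptionEnabled": True,
    },
}

print(json.dumps(session_prefs, indent=2))
```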
A developer at a company uses an SSH key to access multiple Amazon EC2 instances. The company discovers that the SSH key has been posted on a public
GitHub repository. A security engineer verifies that the key has not been used recently.
How should the security engineer prevent unauthorized access to the EC2 instances?
Answer: C
Explanation:
To prevent unauthorized access to the EC2 instances, the security engineer should do the following:
Restrict SSH access in the security group to only known corporate IP addresses. This allows the security engineer to use a virtual firewall that controls inbound
and outbound traffic for their EC2 instances, and limit SSH access to only trusted sources.
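A hedged sketch of that ingress rule, expressed as boto3 request parameters (the corporate CIDR and security group ID are placeholders):

```python
# Ingress rule allowing SSH only from a known corporate range.
ssh_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [
        {"CidrIp": "203.0.113.0/24", "Description": "Corporate offices only"}
    ],
}

# Applying it would look roughly like this (not executed here):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=[ssh_ingress_rule])
```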
A. Create a new role and add each user to the IAM role
B. Use the IAM groups and add users, based upon their role, to different groups and apply the policy to group
C. Create a policy and apply it to multiple users using a JSON script
D. Create an S3 bucket policy with unlimited access which includes each user's IAM account ID
Answer: B
Explanation:
Option A is incorrect since you don't add a user to an IAM role. Option C is incorrect since you don't assign multiple users to a policy. Option D is incorrect since
this is not an ideal approach.
An IAM group is used to collectively manage users who need the same set of permissions. By having groups, it becomes easier to manage permissions. So if you
change the permissions at the group level, it will affect all the users in that group.
For more information on IAM groups, just browse to the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html
The correct answer is: Use the IAM groups and add users, based upon their role, to different groups and apply the policy to group
Submit your Feedback/Queries to our Experts
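A minimal sketch of the group-based approach with boto3 (group, policy, and user names are illustrative):

```python
# Manage permissions at the group level: one policy attachment covers
# every user added to the group.
group_name = "Developers"
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
users = ["alice", "bob"]

# The calls would look roughly like this (not executed here):
# import boto3
# iam = boto3.client("iam")
# iam.create_group(GroupName=group_name)
# iam.attach_group_policy(GroupName=group_name, PolicyArn=policy_arn)
# for user in users:
#     iam.add_user_to_group(GroupName=group_name, UserName=user)
```

Changing the attached policy later updates the effective permissions of every user in the group at once.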
A. Option A
B. Option B
C. Option C
Answer: A
A. Add an inbound rule to the security group to allow traffic from 0.0.0.0/0 for all ports
B. Add an outbound rule to the security group to allow traffic to 0.0.0.0/0 for all ports
C. Then immediately delete these rules.
D. Remove the port 22 security group rule
E. Attach an instance role policy that allows AWS Systems Manager Session Manager connections so that the forensics team can access the target instance.
F. Create a network ACL that is associated with the target instance's subnet
G. Add a rule at the top of the inbound rule set to deny all traffic from 0.0.0.0/0. Add a rule at the top of the outbound rule set to deny all traffic to 0.0.0.0/0.
H. Create an AWS Systems Manager document that adds a host-level firewall rule to block all inbound traffic and outbound traffic
I. Run the document on the target instance.
Answer: C
How can the security engineer improve the security at the edge of the solution to defend against this type of attack?
Answer: C
Explanation:
To improve the security at the edge of the solution to defend against a large, layer 7 DDoS attack, the security engineer should do the following:
Configure AWS WAF with a rate-based rule that imposes a rate limit that automatically blocks requests when the rate limit is exceeded. This allows the security
engineer to use a rule that tracks the number of requests from a single IP address and blocks subsequent requests if they exceed a specified threshold within a
specified time period.
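In WAFv2 terms, such a rate-based rule can be sketched as the following rule definition (the name and limit are illustrative):

```python
# Rate-based rule: block an IP once it exceeds the limit within the
# rolling five-minute window WAF uses for rate-based statements.
rate_rule = {
    "Name": "rate-limit",
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,             # requests per five-minute window
            "AggregateKeyType": "IP",  # track request counts per source IP
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit",
    },
}
```

Once the source IP drops back below the limit, WAF stops blocking its requests automatically, which suits layer 7 flood mitigation.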
A. Use Amazon CloudWatch monitoring to capture Amazon EC2 and networking metrics. Visualize metrics using Amazon CloudWatch dashboards.
B. Run the Amazon Kinesis Agent to write the status data to Amazon Kinesis Data Firehose. Store the streaming data from Kinesis Data Firehose in Amazon
Redshift
C. Then run a script on the pool data and analyze the data in Amazon Redshift
D. Write the status data directly to a public Amazon S3 bucket from the health-checking component. Configure S3 events to invoke an AWS Lambda function that
analyzes the data
E. Generate events from the health-checking component and send them to Amazon CloudWatch Events. Include the status data as event payload
F. Use CloudWatch Events rules to invoke an AWS Lambda function that analyzes the data.
Answer: A
Explanation:
Amazon CloudWatch monitoring is a service that collects and tracks metrics from AWS resources and applications, and provides visualization tools and alarms to
monitor performance and availability1. The health status data in JSON format can be sent to CloudWatch as custom metrics2, and then displayed in CloudWatch
dashboards3. The other options are either inefficient or insecure for monitoring application availability in near-real time.
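Publishing the JSON health status as a custom metric can be sketched roughly as follows (the namespace and metric name are assumptions):

```python
# Parameters for a hypothetical put_metric_data call that reports the
# application's health status as a custom CloudWatch metric.
metric_payload = {
    "Namespace": "App/HealthCheck",
    "MetricData": [
        {
            "MetricName": "HealthyEndpoints",
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
}

# import boto3
# boto3.client("cloudwatch").put_metric_data(**metric_payload)
```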
A. Create a service role that has a composite principal that contains each service that needs the necessary permissions
B. Configure the role to allow the sts:AssumeRole action.
C. Create a service role that has cloudformation.amazonaws.com as the service principal
D. Configure the role to allow the sts:AssumeRole action.
E. For each required set of permissions, add a separate policy to the role to allow those permissions
F. Add the ARN of each CloudFormation stack in the resource field of each policy.
G. For each required set of permissions, add a separate policy to the role to allow those permissions
H. Add the ARN of each service that needs the permissions in the resource field of the corresponding policy.
I. Update each stack to use the service role.
J. Add a policy to each member role to allow the iam:PassRole action
K. Set the policy's resource field to the ARN of the service role.
Answer: BDF
Answer: D
Explanation:
CloudWatchLogsReadOnlyAccess doesn't include "logs:CreateLogStream" but it includes "logs:Get*"
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-identity-based-access-control-cwl.html#:~:te
Answer: B
Explanation:
One can use the AWS Encryption CLI to encrypt the data before sending it across to the S3 bucket. Options A and C are invalid because this would still mean that
data is transferred in plain text. Option D is invalid because you cannot just enable client-side encryption for the S3 bucket. For more information on encrypting and
decrypting data, please visit the below URL:
https://aws.amazon.com/blogs/security/how-to-encrypt-and-decrypt-your-data-with-the-aws-encryption-cli/
The correct answer is: Use the AWS Encryption CLI to encrypt the data first.
Submit your Feedback/Queries to our Experts
A. Create a CodeCommit repository in the security account using AWS Key Management Service (AWS KMS) for encryption. Require the development team to
migrate the Lambda source code to this repository
B. Store the API key in an Amazon S3 bucket in the security account using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to encrypt
the key. Create a presigned URL for the S3 key
C. and specify the URL in a Lambda environment variable in the AWS CloudFormation template. Update the Lambda function code to retrieve the key using the
URL and call the API
D. Create a secret in AWS Secrets Manager in the security account to store the API key using AWS Key Management Service (AWS KMS) for encryption. Grant
access to the IAM role used by the Lambda function so that the function can retrieve the key from Secrets Manager and call the API
E. Create an encrypted environment variable for the Lambda function to store the API key using AWS Key Management Service (AWS KMS) for encryption. Grant
access to the IAM role used by the Lambda function so that the function can decrypt the key at runtime
Answer: C
Explanation:
To securely store the API key, the security team should do the following:
Create a secret in AWS Secrets Manager in the security account to store the API key using AWS Key Management Service (AWS KMS) for encryption. This
allows the security team to encrypt and manage the API key centrally, and to configure automatic rotation schedules for it.
Grant access to the IAM role used by the Lambda function so that the function can retrieve the key from Secrets Manager and call the API. This allows the
security team to avoid storing the API key with the source code, and to use IAM policies to control access to the secret.
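Inside the Lambda function, retrieving the key could be sketched like this (the secret name and JSON field are hypothetical):

```python
import json

def get_api_key(secret_id="prod/api-key"):
    """Fetch the API key from Secrets Manager at runtime instead of
    storing it with the source code. boto3 is imported lazily so the
    sketch stays self-contained until it is actually called."""
    import boto3
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # Assumes the secret stores JSON of the form {"api_key": "..."}.
    return json.loads(response["SecretString"])["api_key"]
```

The Lambda execution role needs secretsmanager:GetSecretValue on the secret (and kms:Decrypt on its KMS key) for this call to succeed.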
Answer: A
Explanation:
To meet the requirements of importing their own key material, setting an expiration date on the keys, and deleting keys immediately, the security engineer should
do the following:
Configure AWS KMS and use a custom key store. This allows the security engineer to use a key manager outside of AWS KMS that they own and manage,
such as an AWS CloudHSM cluster or an external key manager.
Create a customer managed CMK with no key material. Import the company’s keys and key material into the CMK. This allows the security engineer to use
their own key material for encryption and decryption operations, and to specify an expiration date for it.
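The BYOK flow above follows a fixed sequence of KMS API calls, summarized here; the wrapping algorithm and key spec shown are one common choice, so treat the details as a sketch:

```python
# Order of operations for importing key material into a KMS key that was
# created with no key material (Origin=EXTERNAL).
import_steps = [
    "create_key(Origin='EXTERNAL')",
    "get_parameters_for_import(WrappingAlgorithm='RSAES_OAEP_SHA_256', "
    "WrappingKeySpec='RSA_2048')",
    "wrap the local key material with the returned public key",
    "import_key_material(ImportToken=..., EncryptedKeyMaterial=..., "
    "ExpirationModel='KEY_MATERIAL_EXPIRES', ValidTo=...)",
]

for step in import_steps:
    print(step)
```

Setting ValidTo with KEY_MATERIAL_EXPIRES is what satisfies the expiration-date requirement; imported material can also be deleted on demand.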
A. To create the keys use AWS Key Management Service (AWS KMS) and the custom key stores with the CloudHSM cluster
B. For auditing, use Amazon Athena
C. To create the keys use Amazon S3 and the custom key stores with the CloudHSM cluster
D. For auditing use AWS CloudTrail.
E. To create the keys use AWS Key Management Service (AWS KMS) and the custom key stores with the CloudHSM cluster
F. For auditing, use Amazon GuardDuty.
G. To create the keys use AWS Key Management Service (AWS KMS) and the custom key stores with the CloudHSM cluster
H. For auditing, use AWS CloudTrail.
Answer: D
Explanation:
AWS KMS supports asymmetric KMS keys that represent a mathematically related RSA, elliptic curve (ECC), or SM2 (China Regions only) public and private key
pair. These key pairs are generated in AWS KMS hardware security modules certified under the FIPS 140-2 Cryptographic Module Validation Program, except in
the China (Beijing) and China (Ningxia) Regions. The private key never leaves the AWS KMS HSMs unencrypted.
https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html
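Creating such an asymmetric KMS key can be sketched as the following create_key parameters; the key spec and usage shown are one example combination:

```python
# Request parameters for an asymmetric KMS key; the private half never
# leaves the KMS HSMs unencrypted.
create_key_params = {
    "KeySpec": "RSA_2048",
    "KeyUsage": "SIGN_VERIFY",
    "Description": "Illustrative asymmetric signing key",
}

# import boto3
# key = boto3.client("kms").create_key(**create_key_params)
```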
A. Use EC2 Dedicated Instances for the tokenization component of the application.
B. Place the EC2 instances that manage the tokenization process into a partition placement group.
C. Create a separate VPC
D. Deploy new EC2 instances into the separate VPC to support the data tokenization.
E. Deploy the tokenization code onto AWS Nitro Enclaves that are hosted on EC2 instances.
Answer: D
Explanation:
AWS Nitro Enclaves are isolated and hardened virtual machines that run on EC2 instances and provide a secure environment for processing sensitive data. Nitro
Enclaves have no persistent storage, interactive access, or external networking, and they can only communicate with the parent instance through a secure local
channel. Nitro Enclaves also support cryptographic attestation, which allows verifying the identity and integrity of the enclave and its code. Nitro Enclaves are ideal
for implementing data protection solutions such as tokenization, encryption, and key management.
Using Nitro Enclaves for the tokenization component of the application meets the requirements of isolating the sensitive data from other parts of the application,
encrypting and storing the credit card numbers securely, and issuing tokens to replace the numbers. Other components of the application will not be able to access
or store the credit card numbers, as they are only available within the enclave.
A. Configure the S3 Block Public Access feature for the AWS account.
B. Configure the S3 Block Public Access feature for all objects that are in the bucket.
C. Deactivate ACLs for objects that are in the bucket.
D. Use AWS PrivateLink for Amazon S3 to access the bucket.
Answer: D
A. Use TLS certificates from AWS Certificate Manager (ACM) with an Application Load Balancer. Deploy self-signed certificates on the EC2 instances
B. Ensure that the database client software uses a TLS connection to Amazon RDS
C. Enable encryption of the RDS DB instance
D. Enable encryption on the Amazon Elastic Block Store (Amazon EBS) volumes that support the EC2 instances.
E. Use TLS certificates from a third-party vendor with an Application Load Balancer
F. Install the same certificates on the EC2 instances
G. Ensure that the database client software uses a TLS connection to Amazon RDS
H. Use AWS Secrets Manager for client-side encryption of application data.
I. Use AWS CloudHSM to generate TLS certificates for the EC2 instances
J. Install the TLS certificates on the EC2 instances
K. Ensure that the database client software uses a TLS connection to Amazon RDS
L. Use the encryption keys from CloudHSM for client-side encryption of application data.
M. Use Amazon CloudFront with AWS WAF
N. Send HTTP connections to the origin EC2 instances
O. Ensure that the database client software uses a TLS connection to Amazon RDS
P. Use AWS Key Management Service (AWS KMS) for client-side encryption of application data before the data is stored in the RDS database.
Answer: A
A. Set up separate AWS Lambda functions for GuardDuty, IAM Access Analyzer, and Macie to call each service's public API to retrieve high-severity findings
B. Use Amazon Simple Notification Service (Amazon SNS) to send the email alerts
C. Create an Amazon EventBridge rule to invoke the functions on a schedule.
D. Create an Amazon EventBridge rule with a pattern that matches Security Hub findings events with high severity
E. Configure the rule to send the findings to a target Amazon Simple Notification Service (Amazon SNS) topic
F. Subscribe the desired email addresses to the SNS topic.
G. Create an Amazon EventBridge rule with a pattern that matches AWS Control Tower events with high severity
H. Configure the rule to send the findings to a target Amazon Simple Notification Service (Amazon SNS) topic
I. Subscribe the desired email addresses to the SNS topic.
J. Host an application on Amazon EC2 to call the GuardDuty, IAM Access Analyzer, and Macie APIs. Within the application, use the Amazon Simple Notification
Service (Amazon SNS) API to retrieve high-severity findings and to send the findings to an SNS topic
K. Subscribe the desired email addresses to the SNS topic.
Answer: B
Explanation:
The AWS documentation states that you can create an Amazon EventBridge rule with a pattern that matches Security Hub findings events with high severity. You
can then configure the rule to send the findings to a target Amazon Simple Notification Service (Amazon SNS) topic. You can subscribe the desired email
addresses to the SNS topic. This method is the least operational overhead way to meet the requirements.
References: AWS Security Hub User Guide
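The EventBridge rule's event pattern for high-severity Security Hub findings can be sketched roughly like this; the exact severity labels to match are a choice:

```python
# Pattern matched against "Security Hub Findings - Imported" events;
# only findings labeled HIGH or CRITICAL reach the SNS target.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]}
        }
    },
}
```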
A. Create a security group with a rule that denies inbound connections from 0.0.0.0/0 on port 80. Attach this security group to the ALB to overwrite more
permissive rules from the ALB's default security group.
B. Create a network ACL that denies inbound connections from 0.0.0.0/0 on port 80. Associate the network ACL with the VPC's internet gateway
C. Create a network ACL that allows outbound connections to the VPC IP range on port 443 only. Associate the network ACL with the VPC's internet gateway.
D. Create a security group with a single inbound rule that allows connections from 0.0.0.0/0 on port 443. Ensure this security group is the only one associated with
the ALB
Answer: D
Explanation:
To ensure that the load balancer only accepts connections over port 443, the security engineer should do the following:
Create a security group with a single inbound rule that allows connections from 0.0.0.0/0 on port 443.
This means that the security group allows HTTPS traffic from any source IP address.
Ensure this security group is the only one associated with the ALB. Because security groups deny by default, making this the only associated group means no
rule exists to admit HTTP traffic on port 80.
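Because security groups are default-deny, a single allow rule is enough; the behavior can be sketched as:

```python
# The only inbound rule on the ALB's security group: HTTPS from anywhere.
https_only_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}

def port_allowed(rule, port):
    """Security groups deny by default: a port is reachable only if some
    rule's range covers it."""
    return rule["FromPort"] <= port <= rule["ToPort"]

print(port_allowed(https_only_rule, 443))  # True
print(port_allowed(https_only_rule, 80))   # False
```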
B. Create an Amazon Simple Notification Service (Amazon SNS) topic as the target of the Lambda function
C. Subscribe an email endpoint to the SNS topic to receive published messages.
D. Create an Amazon Kinesis Data Firehose delivery stream
E. Integrate the delivery stream with Amazon EventBridge
F. Create an EventBridge rule that has a filter to detect critical Security Hub findings
G. Configure the delivery stream to send the findings to an email address.
H. Create an Amazon EventBridge rule to detect critical Security Hub findings
I. Create an Amazon Simple Notification Service (Amazon SNS) topic as the target of the EventBridge rule
J. Subscribe an email endpoint to the SNS topic to receive published messages.
K. Create an Amazon EventBridge rule to detect critical Security Hub findings
L. Create an Amazon Simple Email Service (Amazon SES) topic as the target of the EventBridge rule
M. Use the Amazon SES API to format the message
N. Choose an email address to be the recipient of the message.
Answer: C
Explanation:
This solution meets the requirement of receiving an email notification about critical findings in AWS Security Hub. Amazon EventBridge is a serverless event bus
that can receive events from AWS services and third-party sources, and route them to targets based on rules and filters. Amazon SNS is a fully managed pub/sub
service that can send messages to various endpoints, such as email, SMS, mobile push, and HTTP. By creating an EventBridge rule that detects critical Security
Hub findings and sends them to an SNS topic, the company can leverage the existing integration between these services and avoid writing custom code or
managing servers. By subscribing an email endpoint to the SNS topic, the company can receive published messages in their inbox.
A. Import the key material into AWS Key Management Service (AWS KMS).
B. Manually upload the new host key to the AWS trusted host keys database.
C. Ensure that the AmazonSSMManagedInstanceCore policy is attached to the EC2 instance profile.
D. Create a new SSH key pair for the EC2 instance.
Answer: B
Explanation:
To set up a CloudFront distribution for an S3 bucket that hosts a static website, and to allow only specified IP addresses to access the website, the following steps
are required:
Create a CloudFront origin access identity (OAI), which is a special CloudFront user that you can associate with your distribution. An OAI allows you to restrict
access to your S3 content by using signed URLs or signed cookies. For more information, see Using an origin access identity to restrict access to your Amazon S3
content.
Create the S3 bucket policy so that only the OAI has access. This will prevent users from accessing the website directly by using S3 URLs, as they will receive
an Access Denied error. To do this, use the AWS Policy Generator to create a bucket policy that grants s3:GetObject permission to the OAI, and attach it to the S3
bucket. For more information, see Restricting access to Amazon S3 content by using an origin access identity.
Create an AWS WAF web ACL and add an IP set rule. AWS WAF is a web application firewall service that lets you control access to your web applications. An
IP set is a condition that specifies a list of IP addresses or IP address ranges that requests originate from. You can use an IP set rule to allow or block
requests based on the IP addresses of the requesters. For more information, see Working with IP match conditions.
Associate the web ACL with the CloudFront distribution. This will ensure that the web ACL filters all requests for your website before they reach your origin.
You can do this by using the AWS WAF console, API, or CLI. For more information, see Associating or disassociating a web ACL with a CloudFront distribution.
This solution will meet the requirements of allowing only specified IP addresses to access the website and preventing direct access by using S3 URLs.
The other options are incorrect because they either do not create a CloudFront distribution for the S3 bucket (A), do not use an OAI to restrict access to the S3
bucket (C), or do not use AWS WAF to block traffic from outside the specified IP addresses (D).
Verified References:
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-ip-conditions.html
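The bucket policy granting read access only to the OAI can be sketched as follows; the bucket name and OAI ID are placeholders:

```python
import json

# Only the CloudFront OAI may read objects; direct S3 URLs get Access
# Denied because no other principal is allowed.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": (
                    "arn:aws:iam::cloudfront:user/"
                    "CloudFront Origin Access Identity E1EXAMPLE"
                )
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-static-site/*",
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```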
A. Create a recurring Amazon Inspector assessment run that runs every day and uses the Network Reachability package
B. Create an Amazon CloudWatch rule that invokes an AWS Lambda function when an assessment run starts
C. Configure the Lambda function to retrieve and evaluate the assessment run report when it completes
D. Configure the Lambda function also to publish an Amazon Simple Notification Service (Amazon SNS) notification if there are any violations for unrestricted
incoming SSH traffic.
E. Use the restricted-ssh AWS Config managed rule that is invoked by security group configuration changes that are not compliant
F. Use the AWS Config remediation feature to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
G. Configure VPC Flow Logs for the VPC
H. and specify an Amazon CloudWatch Logs group
I. Subscribe the CloudWatch Logs group to an AWS Lambda function that parses new log entries, detects successful connections on port 22, and publishes a
notification through Amazon Simple Notification Service (Amazon SNS).
J. Create a recurring Amazon Inspector assessment run that runs every day and uses the Security Best Practices package
K. Create an Amazon CloudWatch rule that invokes an AWS Lambda function when an assessment run starts
L. Configure the Lambda function to retrieve and evaluate the assessment run report when it completes
M. Configure the Lambda function also to publish an Amazon Simple Notification Service (Amazon SNS) notification if there are any violations for unrestricted
incoming SSH traffic.
Answer: B
Explanation:
The most operationally efficient solution to implement a near-real-time monitoring and alerting solution that will notify administrators of security group violations is
to use the restricted-ssh AWS Config managed rule that is invoked by security group configuration changes that are not compliant. This rule checks whether
security groups that are in use have inbound rules that allow unrestricted SSH traffic. If a violation is detected, AWS Config can use the remediation feature to
publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
Option A is incorrect because creating a recurring Amazon Inspector assessment run that uses the Network Reachability package is not operationally efficient, as
it requires setting up an assessment target and template, running the assessment every day, and invoking a Lambda function to retrieve and evaluate the
assessment report. It also does not provide near-real-time monitoring and alerting, as it depends on the frequency and duration of the assessment run.
Option C is incorrect because configuring VPC Flow Logs for the VPC and specifying an Amazon CloudWatch Logs group is not operationally efficient, as it
requires creating a log group and stream, enabling VPC Flow Logs for each subnet or network interface, and subscribing a Lambda function to parse and analyze
the log entries. It also does not provide proactive monitoring and alerting, as it only detects successful connections on port 22 after they have occurred.
Option D is incorrect because creating a recurring Amazon Inspector assessment run that uses the Security
Best Practices package is not operationally efficient, for the same reasons as option A. It also does not provide specific monitoring and alerting for security group
violations, as it covers a broader range of security issues. References:
[AWS Config Rules]
[AWS Config Remediation]
[Amazon Inspector]
[VPC Flow Logs]
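Enabling the managed rule described above can be sketched as the following put_config_rule parameters; the rule name matches the managed rule's default:

```python
# AWS Config managed rule that flags security groups allowing
# unrestricted inbound SSH (0.0.0.0/0 on port 22).
config_rule = {
    "ConfigRuleName": "restricted-ssh",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "INCOMING_SSH_DISABLED",
    },
}

# import boto3
# boto3.client("config").put_config_rule(ConfigRule=config_rule)
```

Because the rule is change-triggered, evaluations run as soon as a security group is modified, which is what makes the alerting near-real-time.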