
Good news: SCS-C02 AWS Certified Security - Specialty is now stable, with reliable pass results.

SCS-C02 Practice Exam Questions and Answers

AWS Certified Security - Specialty

Last Update 2 days ago
Total Questions : 467

AWS Certified Security - Specialty (SCS-C02) is stable now, with the latest exam questions added 2 days ago. Incorporating SCS-C02 practice exam questions into your study plan is more than just a preparation strategy.

SCS-C02 exam questions often include scenarios and problem-solving exercises that mirror real-world challenges. Working through SCS-C02 practice questions also lets you practice pacing yourself, ensuring that you can complete the full AWS Certified Security - Specialty practice test within the allotted time.

SCS-C02 PDF (Printable)
$43.75 (list price $124.99)

SCS-C02 Testing Engine
$50.75 (list price $144.99)

SCS-C02 PDF + Testing Engine
$63.70 (list price $181.99)
Question # 1

A company uses AWS Organizations and has production workloads across multiple AWS accounts. A security engineer needs to design a solution that will proactively monitor for suspicious behavior across all the accounts that contain production workloads.

The solution must automate remediation of incidents across the production accounts. The solution also must publish a notification to an Amazon Simple Notification Service (Amazon SNS) topic when a critical security finding is detected. In addition, the solution must send all security incident logs to a dedicated account.

Which solution will meet these requirements?

Options:

A.  

Activate Amazon GuardDuty in each production account. In a dedicated logging account, aggregate all GuardDuty logs from each production account. Remediate incidents by configuring GuardDuty to directly invoke an AWS Lambda function. Configure the Lambda function to also publish notifications to the SNS topic.

B.  

Activate AWS Security Hub in each production account. In a dedicated logging account, aggregate all Security Hub findings from each production account. Remediate incidents by using AWS Config and AWS Systems Manager. Configure Systems Manager to also publish notifications to the SNS topic.

C.  

Activate Amazon GuardDuty in each production account. In a dedicated logging account, aggregate all GuardDuty logs from each production account. Remediate incidents by using Amazon EventBridge to invoke a custom AWS Lambda function from the GuardDuty findings. Configure the Lambda function to also publish notifications to the SNS topic.

D.  

Activate AWS Security Hub in each production account. In a dedicated logging account, aggregate all Security Hub findings from each production account. Remediate incidents by using Amazon EventBridge to invoke a custom AWS Lambda function from the Security Hub findings. Configure the Lambda function to also publish notifications to the SNS topic.
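
For background on how options like these are usually wired together: GuardDuty or Security Hub findings can be matched by an Amazon EventBridge rule that invokes an AWS Lambda function, and the function can publish to an SNS topic with the AWS SDK. The sketch below is illustrative only and is not one of the answer options; the topic ARN, the environment variable name, and the severity threshold are assumptions.

    import json
    import os
    import boto3

    sns = boto3.client("sns")
    # Hypothetical topic ARN supplied through an environment variable.
    TOPIC_ARN = os.environ.get("SECURITY_TOPIC_ARN",
                               "arn:aws:sns:us-east-1:111122223333:security-alerts")

    def handler(event, context):
        # EventBridge delivers the GuardDuty finding in the "detail" field.
        detail = event.get("detail", {})
        severity = detail.get("severity", 0)
        title = detail.get("title", "Security finding")

        # Notify only for high-severity findings; remediation logic would follow here.
        if severity >= 7:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject=f"Critical security finding: {title}"[:100],
                Message=json.dumps(detail, default=str),
            )
        return {"notified": severity >= 7}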

Question # 2

A security engineer is working with a company to design an ecommerce application. The application will run on Amazon EC2 instances that run in an Auto Scaling group behind an Application Load Balancer (ALB). The application will use an Amazon RDS DB instance for its database.

The only required connectivity from the internet is for HTTP and HTTPS traffic to the application. The application must communicate with an external payment provider that allows traffic only from a preconfigured allow list of IP addresses. The company must ensure that communications with the external payment provider are not interrupted as the environment scales.

Which combination of actions should the security engineer recommend to meet these requirements? (Select THREE.)

Options:

A.  

Deploy a NAT gateway in each private subnet for every Availability Zone that is in use.

B.  

Place the DB instance in a public subnet.

C.  

Place the DB instance in a private subnet.

D.  

Configure the Auto Scaling group to place the EC2 instances in a public subnet.

E.  

Configure the Auto Scaling group to place the EC2 instances in a private subnet.

F.  

Deploy the ALB in a private subnet.
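
As background on how a stable egress address is usually presented to an external allow list: outbound traffic can be routed through a NAT gateway that is associated with an Elastic IP address, so the payment provider sees one fixed source IP regardless of how many instances are running. A minimal boto3 sketch follows; the subnet ID is a placeholder, and the NAT gateway is assumed to sit in a public subnet.

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP; this is the fixed address the payment provider would allow-list.
    eip = ec2.allocate_address(Domain="vpc")

    # Create a NAT gateway that uses the Elastic IP (subnet ID is a placeholder).
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",
        AllocationId=eip["AllocationId"],
    )
    print(nat["NatGateway"]["NatGatewayId"], eip["PublicIp"])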

Question # 3

A company is undergoing a layer 3 and layer 4 DDoS attack on its web servers running on AWS.

Which combination of AWS services and features will provide protection in this scenario? (Select THREE.)

Options:

A.  

Amazon Route 53

B.  

AWS Certificate Manager (ACM)

C.  

Amazon S3

D.  

AWS Shield

E.  

Elastic Load Balancer

F.  

Amazon GuardDuty
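
For context, AWS Shield Standard applies automatically to AWS resources, while Shield Advanced protection for a specific resource (for example, a load balancer or a Route 53 hosted zone) can be registered explicitly. A hedged boto3 sketch is shown below; the protection name and resource ARN are placeholders, and the account is assumed to already have a Shield Advanced subscription.

    import boto3

    shield = boto3.client("shield")

    # Register a resource (placeholder ARN) for Shield Advanced protection.
    response = shield.create_protection(
        Name="web-tier-ddos-protection",
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "loadbalancer/app/web/1234567890abcdef",
    )
    print(response["ProtectionId"])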

Question # 4

An IAM user receives an Access Denied message when the user attempts to access objects in an Amazon S3 bucket. The user and the S3 bucket are in the same AWS account. The S3 bucket is configured to use server-side encryption with AWS KMS keys (SSE-KMS) to encrypt all of its objects at rest by using a customer managed key from the same AWS account. The S3 bucket has no bucket policy defined. The IAM user has been granted permissions through an IAM policy that allows the kms:Decrypt permission to the customer managed key. The IAM policy also allows the s3:List* and s3:Get* permissions for the S3 bucket and its objects.

Which of the following is a possible reason that the IAM user cannot access the objects in the S3 bucket?

Options:

A.  

The IAM policy needs to allow the kms:DescribeKey permission.

B.  

The S3 bucket has been changed to use the AWS managed key to encrypt objects at rest.

C.  

An S3 bucket policy needs to be added to allow the IAM user to access the objects.

D.  

The KMS key policy has been edited to remove the ability for the AWS account to have full access to the key.
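
When troubleshooting this kind of Access Denied error, one quick check is which key the bucket's default encryption actually references, because kms:Decrypt on one customer managed key does not help if objects were encrypted under a different key (for example, the AWS managed aws/s3 key). A small boto3 sketch, with a placeholder bucket name:

    import boto3

    s3 = boto3.client("s3")

    # Inspect the bucket's default encryption configuration (bucket name is a placeholder).
    enc = s3.get_bucket_encryption(Bucket="amzn-s3-demo-bucket")
    for rule in enc["ServerSideEncryptionConfiguration"]["Rules"]:
        sse = rule["ApplyServerSideEncryptionByDefault"]
        # SSEAlgorithm is "aws:kms" for SSE-KMS; KMSMasterKeyID identifies the key in use.
        print(sse["SSEAlgorithm"], sse.get("KMSMasterKeyID", "(no KMS key ID set)"))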

Question # 5

A company is storing data in Amazon S3 Glacier. A security engineer implemented a new vault lock policy for 10 TB of data and called the initiate-vault-lock operation 12 hours ago. The audit team identified a typo in the policy that is allowing unintended access to the vault.

What is the MOST cost-effective way to correct this error?

Options:

A.  

Call the abort-vault-lock operation. Update the policy. Call the initiate-vault-lock operation again.

B.  

Copy the vault data to a new S3 bucket. Delete the vault. Create a new vault with the data.

C.  

Update the policy to keep the vault lock in place.

D.  

Update the policy. Call the initiate-vault-lock operation again to apply the new policy.
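
For reference on the vault lock workflow: initiate-vault-lock attaches the policy in an in-progress state for 24 hours, during which abort-vault-lock can discard it before the lock becomes immutable. A hedged boto3 sketch follows; the vault name, account ID, and policy contents are placeholders, not the company's actual policy.

    import json
    import boto3

    glacier = boto3.client("glacier")
    VAULT = "example-vault"  # placeholder vault name

    # Abort the in-progress lock (possible only while the lock has not been completed).
    glacier.abort_vault_lock(accountId="-", vaultName=VAULT)

    # Re-initiate the lock with a corrected policy (contents are illustrative).
    corrected_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyDeleteBeforeOneYear",
            "Principal": "*",
            "Effect": "Deny",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/example-vault",
            "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
        }],
    }
    lock = glacier.initiate_vault_lock(
        accountId="-",
        vaultName=VAULT,
        policy={"Policy": json.dumps(corrected_policy)},
    )
    print(lock["lockId"])  # pass this ID to complete-vault-lock after validating the policy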

Question # 6

A company has deployed Amazon GuardDuty and now wants to implement automation for potential threats. The company has decided to start with RDP brute force attacks that come from Amazon EC2 instances in the company’s AWS environment. A security engineer needs to implement a solution that blocks the detected communication from a suspicious instance until investigation and potential remediation can occur.

Which solution will meet these requirements?

Options:

A.  

Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event with an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.

B.  

Configure GuardDuty to send the event to Amazon EventBridge (Amazon CloudWatch Events). Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS) and adds a web ACL rule to block traffic to and from the suspicious instance.

C.  

Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge (Amazon CloudWatch Events). Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance.

D.  

Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
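
As background on the quarantine step these options describe: a GuardDuty finding delivered through EventBridge identifies the affected EC2 instance, and a Lambda function can then isolate that instance until an engineer investigates. The sketch below shows one illustrative approach (swapping the instance's security groups); it is not the answer, and the quarantine security group ID is an assumption.

    import boto3

    ec2 = boto3.client("ec2")
    QUARANTINE_SG = "sg-0123456789abcdef0"  # assumed security group with no rules attached

    def handler(event, context):
        # GuardDuty findings delivered by EventBridge carry the affected instance here.
        instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]

        # Replace the instance's security groups with the quarantine group so the
        # suspicious communication is blocked until the investigation completes.
        ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
        return {"quarantined": instance_id}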

Question # 7

A company stores sensitive documents in Amazon S3 by using server-side encryption with an AWS Key Management Service (AWS KMS) CMK. A new requirement mandates that the CMK that is used for these documents can be used only for S3 actions.

Which statement should the company add to the key policy to meet this requirement?

A) (Key policy statement provided as an image.)

B) (Key policy statement provided as an image.)

Options:

A.  

Option A

B.  

Option B
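
The two policy statements for this question are provided as images and are not reproduced here. As general background only, limiting a KMS key so that its cryptographic actions can be used only through Amazon S3 is typically done with the kms:ViaService condition key. The statement below is an illustrative example expressed as a Python dictionary; it is not either answer option, and the principal, Region, and action list are assumptions.

    # Illustrative key policy statement: allows use of the key only for requests
    # that come through Amazon S3 in us-east-1 (values are placeholders).
    statement = {
        "Sid": "AllowUseOnlyViaS3",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"kms:ViaService": "s3.us-east-1.amazonaws.com"}
        },
    }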

Question # 8

A developer is building a serverless application hosted on AWS that uses Amazon Redshift as a data store. The application has separate modules for read/write and read-only functionality. The modules need their own database users for compliance reasons.

Which combination of steps should a security engineer implement to grant appropriate access? (Select TWO.)

Options:

A.  

Configure cluster security groups for each application module to control access to the database users that are required for read-only and read/write.

B.  

Configure a VPC endpoint for Amazon Redshift. Configure an endpoint policy that maps database users to each application module, and allow access to the tables that are required for read-only and read/write.

C.  

Configure an IAM policy for each module. Specify the ARN of an Amazon Redshift database user that allows the GetClusterCredentials API call.

D.  

Create local database users for each module.

E.  

Configure an IAM policy for each module. Specify the ARN of an IAM user that allows the GetClusterCredentials API call.
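
For context on the GetClusterCredentials approach mentioned in these options: an application module can request short-lived credentials for a specific Amazon Redshift database user at runtime instead of storing passwords. A hedged boto3 sketch follows; the cluster identifier, database name, and user name are placeholders.

    import boto3

    redshift = boto3.client("redshift")

    # Request temporary credentials for the read-only module's database user.
    creds = redshift.get_cluster_credentials(
        DbUser="app_readonly",            # placeholder database user
        DbName="appdb",                   # placeholder database name
        ClusterIdentifier="analytics-cluster",
        DurationSeconds=900,
        AutoCreate=False,
    )
    print(creds["DbUser"], creds["Expiration"])
    # creds["DbPassword"] would then be used to open the database connection.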

Question # 9

A security engineer is configuring AWS Config for an AWS account that uses a new IAM entity. When the security engineer tries to configure AWS Config rules and automatic remediation options, errors occur. In the AWS CloudTrail logs, the security engineer sees the following error message: "Insufficient delivery policy to s3 bucket DOC-EXAMPLE-BUCKET, unable to write to bucket provided s3 key prefix is 'null'."

Which combination of steps should the security engineer take to remediate this issue? (Select TWO.)

Options:

A.  

Check the Amazon S3 bucket policy. Verify that the policy allows the config.amazonaws.com service to write to the target bucket.

B.  

Verify that the IAM entity has the permissions necessary to perform the s3:GetBucketAcl and s3:PutObject operations to write to the target bucket.

C.  

Verify that the Amazon S3 bucket policy has the permissions necessary to perform the s3:GetBucketAcl and s3:PutObject operations to write to the target bucket.

D.  

Check the policy that is associated with the IAM entity. Verify that the policy allows the config.amazonaws.com service to write to the target bucket.

E.  

Verify that the AWS Config service role has permissions to invoke the BatchGetResourceConfig action instead of the GetResourceConfigHistory action and the s3:PutObject operation.
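
For reference, the delivery bucket policy that AWS Config documentation describes grants the config.amazonaws.com service principal s3:GetBucketAcl on the bucket and s3:PutObject on the AWSLogs delivery prefix. The version below is illustrative, written as a Python dictionary; the account ID is a placeholder, and the bucket name is taken from the error message in the question.

    # Illustrative AWS Config delivery bucket policy (account ID is a placeholder).
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSConfigBucketPermissionsCheck",
                "Effect": "Allow",
                "Principal": {"Service": "config.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
            },
            {
                "Sid": "AWSConfigBucketDelivery",
                "Effect": "Allow",
                "Principal": {"Service": "config.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/AWSLogs/111122223333/Config/*",
                # AWS Config writes objects only when it can grant the bucket owner full control.
                "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
            },
        ],
    }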

Question # 10

A company is running an application on Amazon EC2 instances in an Auto Scaling group. The application stores logs locally. A security engineer noticed that logs were lost after a scale-in event. The security engineer needs to recommend a solution to ensure the durability and availability of log data. All logs must be kept for a minimum of 1 year for auditing purposes. What should the security engineer recommend?

Options:

A.  

Within the Auto Scaling lifecycle, add a hook to create and attach an Amazon Elastic Block Store (Amazon EBS) log volume each time an EC2 instance is created. When the instance is terminated, the EBS volume can be reattached to another instance for log review.

B.  

Create an Amazon Elastic File System (Amazon EFS) file system and add a command in the user data section of the Auto Scaling launch template to mount the EFS file system during EC2 instance creation. Configure a process on the instance to copy the logs once a day from an instance Amazon Elastic Block Store (Amazon EBS) volume to a directory in the EFS file system.

C.  

Add an Amazon CloudWatch agent into the AMI used in the Auto Scaling group. Configure the CloudWatch agent to send the logs to Amazon CloudWatch Logs for review.

D.  

Within the Auto Scaling lifecycle, add a lifecycle hook at the terminating state transition and alert the engineering team by using a lifecycle notification to Amazon Simple Notification Service (Amazon SNS). Configure the hook to remain in the Terminating:Wait state for 1 hour to allow manual review of the security logs prior to instance termination.
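
As background on centralizing instance logs off the instances: the CloudWatch agent ships log files to a CloudWatch Logs log group, and a retention policy on that group can enforce the one-year minimum even after instances scale in. A minimal boto3 sketch follows; the log group name is a placeholder and is assumed to match the agent configuration.

    import boto3

    logs = boto3.client("logs")
    LOG_GROUP = "/app/instance-logs"  # placeholder log group written to by the CloudWatch agent

    # Retain log events for 365 days to satisfy the one-year audit requirement.
    logs.put_retention_policy(logGroupName=LOG_GROUP, retentionInDays=365)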

Get SCS-C02 dumps and pass your exam in 24 hours!
