
Exams4sure Dumps

SAP-C02 Practice Questions

AWS Certified Solutions Architect - Professional

Last Update 16 hours ago
Total Questions: 645

Dive into our fully updated and stable SAP-C02 practice test platform, featuring the latest AWS Certified Solutions Architect - Professional exam questions added this week. Our preparation tool is more than just an Amazon Web Services study aid; it's a strategic advantage.

Our free practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key SAP-C02 concepts. Use this test to pinpoint the areas where you need to focus your study.

Question # 1

A company stores a static website on Amazon S3. AWS Lambda functions retrieve content from an S3 bucket and serve the content as a website. An Application Load Balancer (ALB) directs incoming traffic to the Lambda functions. An Amazon CloudFront distribution routes requests to the ALB.

The company has set up an AWS Certificate Manager (ACM) certificate on the HTTPS listener of the ALB.

The company needs all users to communicate with the website through HTTPS. HTTP users must not receive an error.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.  

Configure the ALB with a TCP listener on port 443 for passthrough to backend systems.

B.  

Create an S3 bucket policy that denies access to the S3 bucket if the aws:SecureTransport condition key is false.

C.  

Configure HTTP to HTTPS redirection on the S3 bucket.

D.  

Set the origin protocol policy to HTTPS Only for CloudFront.

E.  

Set the viewer protocol policy to HTTPS Only for CloudFront.

F.  

Set the viewer protocol policy to Redirect HTTP to HTTPS for CloudFront.
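Option B refers to the common pattern of denying requests that arrive over plain HTTP by testing the aws:SecureTransport condition key. A minimal sketch of such a bucket policy, built as a Python dict (the bucket name is a placeholder, not from the question):

```python
import json

# Hypothetical bucket name, used for illustration only.
BUCKET = "example-static-site-bucket"

# Deny any S3 action when the request did not use TLS
# (aws:SecureTransport evaluates to "false" for plain HTTP).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that a Deny like this returns an error to HTTP clients rather than redirecting them, which is worth weighing against the question's "HTTP users must not receive an error" requirement.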

Question # 2

A company manages hundreds of AWS accounts centrally in an organization in AWS Organizations. The company recently started to allow product teams to create and manage their own S3 access points in their accounts. The S3 access points can be accessed only within VPCs, not on the internet.

What is the MOST operationally efficient way to enforce this requirement?

Options:

A.  

Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to vpc.

B.  

Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

C.  

Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

D.  

Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
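The organization-wide SCP described in option B could look like the following sketch, built as a Python dict. The statement Sid is an assumption; the action and condition key are taken from the question itself:

```python
import json

# Service control policy sketch: deny creation of any S3 access point
# whose network origin is not a VPC, across every account in the org.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonVpcAccessPoints",
            "Effect": "Deny",
            "Action": "s3:CreateAccessPoint",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because an SCP attached at the root applies to all member accounts automatically, no per-account deployment is needed as new accounts are added.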

Question # 3

A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several public subnets and private subnets. The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private subnets have a route for 0.0.0.0/0 to an existing NAT gateway.

A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The EC2 instances that are in private subnets must not be accessible from the public internet.

What should the solutions architect do to meet these requirements?

Options:

A.  

Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Update all the VPC route tables, and add a route for ::/0 to the internet gateway.

B.  

Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Update the VPC route tables for all private subnets, and add a route for ::/0 to the NAT gateway.

C.  

Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the VPC and all subnets. Create an egress-only internet gateway. Update the VPC route tables for all private subnets, and add a route for ::/0 to the egress-only internet gateway.

D.  

Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all subnets. Create a new NAT gateway, and enable IPv6 support. Update the VPC route tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.
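The routing difference between the options above can be sketched as route-table entries. The gateway IDs below are placeholders; the key point is that an egress-only internet gateway (option C) gives private subnets outbound-only IPv6 connectivity, since IPv6 addresses are publicly routable and a NAT gateway is not the IPv6 mechanism for this:

```python
# Illustrative route-table entries for the IPv6 migration.
# All resource IDs are hypothetical placeholders.

public_subnet_routes = [
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0123456789abcdef0"},
    {"DestinationIpv6CidrBlock": "::/0", "GatewayId": "igw-0123456789abcdef0"},
]

# Private subnets send IPv6 traffic through an egress-only internet gateway,
# which permits outbound connections but blocks unsolicited inbound traffic.
private_subnet_routes = [
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0123456789abcdef0"},
    {
        "DestinationIpv6CidrBlock": "::/0",
        "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0",
    },
]
```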

Question # 4

A solutions architect must analyze a company's Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to determine whether the company is using resources efficiently. The company is running several large, high-memory EC2 instances to host database clusters that are deployed in active/passive configurations. The utilization of these EC2 instances varies by the applications that use the databases, and the company has not identified a pattern.

The solutions architect must analyze the environment and take action based on the findings.

Which solution meets these requirements MOST cost-effectively?

Options:

A.  

Create a dashboard by using AWS Systems Manager OpsCenter. Configure visualizations for Amazon CloudWatch metrics that are associated with the EC2 instances and their EBS volumes. Review the dashboard periodically and identify usage patterns. Right-size the EC2 instances based on the peaks in the metrics.

B.  

Turn on Amazon CloudWatch detailed monitoring for the EC2 instances and their EBS volumes. Create and review a dashboard that is based on the metrics. Identify usage patterns. Right-size the EC2 instances based on the peaks in the metrics.

C.  

Install the Amazon CloudWatch agent on each of the EC2 instances. Turn on AWS Compute Optimizer, and let it run for at least 12 hours. Review the recommendations from Compute Optimizer, and right-size the EC2 instances as directed.

D.  

Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor. Wait 12 hours. Review the recommendations from Trusted Advisor, and right-size the EC2 instances as directed.

Question # 5

A media storage application uploads user photos to Amazon S3 for processing by AWS Lambda functions. Application state is stored in Amazon DynamoDB tables. Users are reporting that some uploaded photos are not being processed properly. The application developers trace the logs and find that Lambda is experiencing photo processing issues when thousands of users upload photos simultaneously. The issues are the result of Lambda concurrency limits and the performance of DynamoDB when data is saved.

Which combination of actions should a solutions architect take to increase the performance and reliability of the application? (Select TWO.)

Options:

A.  

Evaluate and adjust the RCUs for the DynamoDB tables.

B.  

Evaluate and adjust the WCUs for the DynamoDB tables.

C.  

Add an Amazon ElastiCache layer to increase the performance of Lambda functions.

D.  

Add an Amazon Simple Queue Service (Amazon SQS) queue and reprocessing logic between Amazon S3 and the Lambda functions.

E.  

Use S3 Transfer Acceleration to provide lower latency to users.

Question # 6

A company runs a Python script on an Amazon EC2 instance to process data. The script runs every 10 minutes. The script ingests files from an Amazon S3 bucket and processes the files. On average, the script takes approximately 5 minutes to process each file. The script will not reprocess a file that the script has already processed.

The company reviewed Amazon CloudWatch metrics and noticed that the EC2 instance is idle for approximately 40% of the time because of the file processing speed. The company wants to make the workload highly available and scalable. The company also wants to reduce long-term management overhead.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Migrate the data processing script to an AWS Lambda function. Use an S3 event notification to invoke the Lambda function to process the objects when the company uploads the objects.

B.  

Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure Amazon S3 to send event notifications to the SQS queue. Create an EC2 Auto Scaling group with a minimum size of one instance. Update the data processing script to poll the SQS queue. Process the S3 objects that the SQS message identifies.

C.  

Migrate the data processing script to a container image. Run the data processing container on an EC2 instance. Configure the container to poll the S3 bucket for new objects and to process the resulting objects.

D.  

Migrate the data processing script to a container image that runs on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Create an AWS Lambda function that calls the Fargate RunTask API operation when the container processes the file. Use an S3 event notification to invoke the Lambda function.
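Option A's event-driven approach hinges on an S3 event notification that invokes the Lambda function for each upload. A sketch of that notification configuration, built as a Python dict (the function ARN and account ID are placeholders):

```python
import json

# Sketch of an S3 bucket notification configuration: invoke a Lambda
# function for every newly created object. The ARN is hypothetical.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "invoke-processor-on-upload",
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:111122223333:function:process-file"
            ),
            # Fire on any object-creation event (PUT, POST, multipart, copy).
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

print(json.dumps(notification_config, indent=2))
```

One caveat worth checking against the scenario: Lambda's maximum execution time is 15 minutes, so a 5-minute average per file fits, but only if per-file processing never exceeds that ceiling.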

Question # 7

A solutions architect needs to assess a newly acquired company’s portfolio of applications and databases. The solutions architect must create a business case to migrate the portfolio to AWS. The newly acquired company runs applications in an on-premises data center. The data center is not well documented. The solutions architect cannot immediately determine how many applications and databases exist. Traffic for the applications is variable. Some applications are batch processes that run at the end of each month.

The solutions architect must gain a better understanding of the portfolio before a migration to AWS can begin.

Which solution will meet these requirements?

Options:

A.  

Use AWS Server Migration Service (AWS SMS) and AWS Database Migration Service (AWS DMS) to evaluate migration. Use AWS Service Catalog to understand application and database dependencies.

B.  

Use AWS Application Migration Service. Run agents on the on-premises infrastructure. Manage the agents by using AWS Migration Hub. Use AWS Storage Gateway to assess local storage needs and database dependencies.

C.  

Use Migration Evaluator to generate a list of servers. Build a report for a business case. Use AWS Migration Hub to view the portfolio. Use AWS Application Discovery Service to gain an understanding of application dependencies.

D.  

Use AWS Control Tower in the destination account to generate an application portfolio. Use AWS Server Migration Service (AWS SMS) to generate deeper reports and a business case. Use a landing zone for core accounts and resources.

Question # 8

A company’s solutions architect is evaluating an AWS workload that was deployed several years ago. The application tier is stateless and runs on a single large Amazon EC2 instance that was launched from an AMI. The application stores data in a MySQL database that runs on a single EC2 instance.

The CPU utilization on the application server EC2 instance often reaches 100% and causes the application to stop responding. The company manually installs patches on the instances. Patching has caused

downtime in the past. The company needs to make the application highly available.

Which solution will meet these requirements with the LEAST development time?

Options:

A.  

Move the application tier to AWS Lambda functions in the existing VPC. Create an Application Load Balancer to distribute traffic across the Lambda functions. Use Amazon GuardDuty to scan the Lambda functions. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).

B.  

Change the EC2 instance type to a smaller Graviton powered instance type. Use the existing AMI to create a launch template for an Auto Scaling group. Create an Application Load Balancer to distribute traffic across the instances in the Auto Scaling group. Set the Auto Scaling group to scale based on CPU utilization. Migrate the database to Amazon DynamoDB.

C.  

Move the application tier to containers by using Docker. Run the containers on Amazon Elastic Container Service (Amazon ECS) with EC2 instances. Create an Application Load Balancer to distribute traffic across the ECS cluster Configure the ECS cluster to scale based on CPU utilization. Migrate the database to Amazon Neptune.

D.  

Create a new AMI that is configured with AWS Systems Manager Agent (SSM Agent). Use the new AMI to create a launch template for an Auto Scaling group. Use smaller instances in the Auto Scaling group. Create an Application Load Balancer to distribute traffic across the instances in the Auto Scaling group. Set the Auto Scaling group to scale based on CPU utilization. Migrate the database to Amazon Aurora MySQL.

Question # 9

A company creates an AWS Control Tower landing zone to manage and govern a multi-account AWS environment. The company's security team will deploy preventive controls and detective controls to monitor AWS services across all the accounts. The security team needs a centralized view of the security state of all the accounts.

Which solution will meet these requirements?

Options:

A.  

From the AWS Control Tower management account, use AWS CloudFormation StackSets to deploy an AWS Config conformance pack to all accounts in the organization.

B.  

Enable Amazon Detective for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Detective.

C.  

From the AWS Control Tower management account, deploy an AWS CloudFormation stack set that uses the automatic deployment option to enable Amazon Detective for the organization.

D.  

Enable AWS Security Hub for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Security Hub.

Question # 10

A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units each deploy their production workloads into a common production account.

Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a different development unit. A solutions architect must create a solution that prevents a similar incident from happening in the future. The solution also must allow developers the possibility to manage the instances used for their workloads.

Which strategy will meet these requirements?

Options:

A.  

Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name. Assign the SCP to the corresponding OU.

B.  

Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.

C.  

Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU.

D.  

Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role.
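The session-tag approach in option B can be sketched as an IAM policy statement that compares the instance's DevelopmentUnit tag against the caller's principal tag via a policy variable. The Sid and the choice to scope the deny to ec2:TerminateInstances are assumptions for illustration:

```python
import json

# Sketch of a deny statement: terminating an instance is denied unless
# the instance's DevelopmentUnit tag matches the caller's session tag.
deny_cross_unit = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOtherUnitsInstances",
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    # ${aws:PrincipalTag/...} is an IAM policy variable that
                    # resolves to the tag passed as an STS session tag.
                    "ec2:ResourceTag/DevelopmentUnit": (
                        "${aws:PrincipalTag/DevelopmentUnit}"
                    )
                }
            },
        }
    ],
}

print(json.dumps(deny_cross_unit, indent=2))
```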

Question # 11

A company is migrating its on-premises IoT platform to AWS. The platform consists of the following components:

A MongoDB cluster as a data store for all collected and processed IoT data.

An application that uses MQTT to connect to IoT devices every 5 minutes to collect data.

An application that runs jobs periodically to generate reports from the IoT data. The jobs take 120–600 seconds to finish running.

A web application that runs on a web server. End users use the web application to generate reports that are accessible to the general public.

The company needs to migrate the platform to AWS to reduce operational overhead while maintaining performance.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

Options:

A.  

Create AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Configure an Amazon CloudFront distribution that has an S3 origin to serve the reports.

B.  

Create an AWS Lambda function. Program the Lambda function to connect to the IoT devices, process the data, and write the data to the data store. Configure a Lambda layer to temporarily store messages for processing.

C.  

Configure an Amazon EKS cluster with Amazon EC2 instances to prepare the reports. Create an ingress controller on the EKS cluster to serve the reports.

D.  

Connect the IoT devices to AWS IoT Core to publish messages. Create an AWS IoT rule that runs when a message is received. Configure the rule to call an AWS Lambda function. Program the Lambda function to parse, transform, and store device message data to the data store.

E.  

Migrate the MongoDB cluster to Amazon DocumentDB.

F.  

Migrate the MongoDB cluster to Amazon EC2 instances.

Question # 12

A company uses AWS Organizations to manage a multi-account structure. The company has hundreds of AWS accounts and expects the number of accounts to increase. The company is building a new application that uses Docker images. The company will push the Docker images to Amazon Elastic Container Registry (Amazon ECR). Only accounts that are within the company's organization should have access to the images.

The company has a CI/CD process that runs frequently. The company wants to retain all the tagged images. However, the company wants to retain only the five most recent untagged images.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Create a private repository in Amazon ECR. Create a permissions policy for the repository that allows only required ECR operations. Include a condition to allow the ECR operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.

B.  

Create a public repository in Amazon ECR. Create an IAM role in the ECR account. Set permissions so that any account can assume the role if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five.

C.  

Create a private repository in Amazon ECR. Create a permissions policy for the repository that includes only required ECR operations. Include a condition to allow the ECR operations for all account IDs in the organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.

D.  

Create a public repository in Amazon ECR. Configure Amazon ECR to use an interface VPC endpoint with an endpoint policy that includes the required permissions for images that the company needs to pull. Include a condition to allow the ECR operations for all account IDs in the company's organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
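The lifecycle rule mentioned in options A and B follows Amazon ECR's lifecycle policy document format. A sketch of a rule that expires untagged images beyond the five most recent, built as a Python dict:

```python
import json

# ECR lifecycle policy sketch: keep only the five most recent untagged
# images; tagged images are untouched because the rule selects
# tagStatus "untagged" only.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images beyond the five most recent",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 5,
            },
            "action": {"type": "expire"},
        }
    ]
}

print(json.dumps(lifecycle_policy, indent=2))
```

Because ECR evaluates lifecycle policies automatically, this removes the need for the scheduled EventBridge/Lambda cleanup that options C and D describe.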

Question # 13

A company uses multiple software as a service (SaaS) applications for messaging, email, and file sharing. The SaaS applications are compatible with AWS AppFabric. The company’s web application runs in a VPC on an Amazon EKS cluster and uses Amazon S3 to store data.

The company wants to detect security incidents across the SaaS applications and the web application that could compromise company data. The company needs a centralized solution that provides a dashboard. The dashboard must show the IP addresses, email addresses, and access frequencies of unique users across its SaaS applications and the web application.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

Options:

A.  

Ingest audit log data from each SaaS application into AWS AppFabric. Convert the audit log data into Open Cybersecurity Schema Framework (OCSF) normalized Apache Parquet format. Send the logs to Amazon Data Firehose to be delivered to an Amazon Security Lake S3 bucket.

B.  

Ingest networking and usage log data from each SaaS application into AWS AppFabric. Convert the networking and usage log data into JSON format. Send the logs to Amazon Data Firehose to be delivered to Amazon OpenSearch Service.

C.  

Create an Amazon S3 bucket to receive logs in JSON format through Amazon Data Firehose. Create a dashboard in Amazon CloudWatch. Configure the dashboard to visualize the location of the IP addresses, email addresses, and access frequencies of unique users by using data from the S3 bucket.

D.  

Configure the logs associated with AWS CloudTrail management events, AWS CloudTrail data events for Amazon S3, Amazon EKS audit logs, and VPC Flow Logs as sources in Amazon Security Lake. Add AWS AppFabric as a custom source in Security Lake.

E.  

Configure Amazon Security Lake to send security data from different sources to Amazon Redshift. Use Amazon QuickSight to create a visualization of the security data.

F.  

Configure Amazon Security Lake to send security data from different sources to Amazon OpenSearch Service by using OpenSearch Ingestion. Use the OpenSearch Service dashboard to create a visualization of the security data.

Question # 14

A company is running an application on Amazon EC2 instances in the AWS Cloud. The application is using a MongoDB database with a replica set as its data tier. The MongoDB database is installed on systems in the company's on-premises data center and is accessible through an AWS Direct Connect connection to the data center environment.

A solutions architect must migrate the on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility).

Which strategy should the solutions architect choose to perform this migration?

Options:

A.  

Create a fleet of EC2 instances. Install MongoDB Community Edition on the EC2 instances, and create a database. Configure continuous synchronous replication with the database that is running in the on-premises data center.

B.  

Create an AWS Database Migration Service (AWS DMS) replication instance. Create a source endpoint for the on-premises MongoDB database by using change data capture (CDC). Create a target endpoint for the Amazon DocumentDB database. Create and run a DMS migration task.

C.  

Create a data migration pipeline by using AWS Data Pipeline. Define data nodes for the on-premises MongoDB database and the Amazon DocumentDB database. Create a scheduled task to run the data pipeline.

D.  

Create a source endpoint for the on-premises MongoDB database by using AWS Glue crawlers. Configure continuous asynchronous replication between the MongoDB database and the Amazon DocumentDB database.

Question # 15

A company creates an Amazon API Gateway API and shares the API with an external development team. The API uses AWS Lambda functions and is deployed to a stage that is named Production.

The external development team is the sole consumer of the API. The API experiences sudden increases of usage at specific times, leading to concerns about increased costs. The company needs to limit cost and usage without reworking the Lambda functions.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Configure the API to send requests to Amazon SQS queues instead of directly to the Lambda functions. Update the Lambda functions to consume messages from the queues and to process the requests. Set up the queues to invoke the Lambda functions when new messages arrive.

B.  

Configure provisioned concurrency for each Lambda function. Use AWS Application Auto Scaling to register the Lambda functions as targets. Set up scaling schedules to increase and decrease capacity to match changes in API usage.

C.  

Create an API Gateway API key and an AWS WAF Regional web ACL. Associate the web ACL with the Production stage. Add a rate-based rule to the web ACL. In the rule, specify the rate limit and a custom request aggregation that uses the X-API-Key header. Share the API key with the external development team.

D.  

Create an API Gateway API key and usage plan. Define throttling limits and quotas in the usage plan. Associate the usage plan with the Production stage and the API key. Share the API key with the external development team.
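The usage plan described in option D combines throttling (rate and burst) with a quota over a period. A sketch of those settings as a Python dict mirroring the API Gateway usage-plan shape; the name, API ID, and numeric limits are illustrative values, not prescribed by the question:

```python
# Sketch of usage-plan settings associating throttle and quota limits
# with the Production stage. All concrete values are placeholders.
usage_plan = {
    "name": "external-dev-team",
    "apiStages": [{"apiId": "a1b2c3d4e5", "stage": "Production"}],
    # Throttle steady-state requests per second and allowed burst.
    "throttle": {"rateLimit": 100.0, "burstLimit": 200},
    # Cap total requests over the billing period.
    "quota": {"limit": 100000, "period": "MONTH"},
}
```

With the plan tied to an API key shared with the external team, requests beyond the limits are rejected by API Gateway before any Lambda function is invoked, which is what caps the cost.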
