
SAP-C02 AWS Certified Solutions Architect - Professional is now stable and comes with pass results | Test Your Knowledge for Free

Exams4sure Dumps

SAP-C02 Practice Questions

AWS Certified Solutions Architect - Professional

Last Update 19 hours ago
Total Questions : 645

Dive into our fully updated and stable SAP-C02 practice test platform, featuring all the latest AWS Certified Professional exam questions added this week. Our preparation tool is more than just an Amazon Web Services study aid; it's a strategic advantage.

Our free AWS Certified Professional practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key SAP-C02 concepts. Use this test to pinpoint the areas where you need to focus your study.

SAP-C02 PDF

SAP-C02 PDF (Printable)
$43.75
$124.99

SAP-C02 Testing Engine

SAP-C02 Testing Engine
$50.75
$144.99

SAP-C02 PDF + Testing Engine

SAP-C02 PDF + Testing Engine
$63.70
$181.99
Question # 166

A company processes environmental data. The company has set up sensors to provide a continuous stream of data from different areas in a city. The data is available in JSON format.

The company wants to use an AWS solution to send the data to a database that does not require fixed schemas for storage. The data must be sent in real time.

Which solution will meet these requirements?

Options:

A.  

Use Amazon Kinesis Data Firehose to send the data to Amazon Redshift.

B.  

Use Amazon Kinesis Data Streams to send the data to Amazon DynamoDB.

C.  

Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to send the data to Amazon Aurora.

D.  

Use Amazon Kinesis Data Firehose to send the data to Amazon Keyspaces (for Apache Cassandra).
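For context, a minimal sketch (Python with boto3) of the pattern that option B describes: a producer pushes each JSON sensor reading into a Kinesis data stream in real time, and a consumer writes the record to a schemaless DynamoDB table. The stream and table names below are hypothetical.

import json
import boto3

kinesis = boto3.client("kinesis")
dynamodb = boto3.resource("dynamodb")

# Producer side: each sensor reading is sent as a JSON blob in real time.
reading = {"sensor_id": "st-101", "area": "downtown", "pm25": 14}
kinesis.put_record(
    StreamName="city-sensor-stream",      # hypothetical stream name
    Data=json.dumps(reading).encode(),
    PartitionKey=reading["sensor_id"],
)

# Consumer side (for example, a Lambda function triggered by the stream):
# DynamoDB stores the item as-is, so no fixed schema is required.
table = dynamodb.Table("SensorReadings")  # hypothetical table name
table.put_item(Item=reading)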

Question # 167

A research center is migrating to the AWS Cloud and has moved its on-premises 1 PB object storage to an Amazon S3 bucket. One hundred scientists are using this object storage to store their work-related documents. Each scientist has a personal folder on the object store. All the scientists are members of a single IAM user group.

The research center's compliance officer is worried that scientists will be able to access each other's work. The research center has a strict obligation to report on which scientist accesses which documents.

The team that is responsible for these reports has little AWS experience and wants a ready-to-use solution that minimizes operational overhead.

Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)

Options:

A.  

Create an identity policy that grants the user read and write access. Add a condition that specifies that the S3 paths must be prefixed with ${aws:username}. Apply the policy on the scientists' IAM user group.

B.  

Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket. Store the trail output in another S3 bucket. Use Amazon Athena to query the logs and generate reports.

C.  

Enable S3 server access logging. Configure another S3 bucket as the target for log delivery. Use Amazon Athena to query the logs and generate reports.

D.  

Create an S3 bucket policy that grants read and write access to users in the scientists' IAM user group.

E.  

Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket and write the events to Amazon CloudWatch. Use the Amazon Athena CloudWatch connector to query the logs and generate reports.
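As an illustration of the per-user prefix pattern in option A, the sketch below (Python with boto3) attaches an inline policy to the scientists' group that scopes object access to a folder named after the requesting IAM user. The bucket and group names are hypothetical.

import json
import boto3

iam = boto3.client("iam")

# "${aws:username}" is an IAM policy variable resolved at request time,
# so each scientist can only reach s3://research-docs/<their-username>/*.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::research-docs/${aws:username}/*",
        }
    ],
}

iam.put_group_policy(
    GroupName="scientists",                # hypothetical group name
    PolicyName="per-user-folder-access",
    PolicyDocument=json.dumps(policy),
)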

Question # 168


A company is replicating an application in a secondary Region. The application uses DynamoDB and RDS for MySQL. The secondary Region must function independently during a disaster.

Which solution will meet these requirements?

Options:

A.  

Use DynamoDB global tables and an RDS read replica.

B.  

Use DAX and a read replica.

C.  

Use global tables and RDS Multi-AZ with standby in secondary Region.

D.  

Use Streams and Lambda to copy data. Use read replica.

Question # 169

A company is running a critical application that uses an Amazon RDS for MySQL database to store data. The RDS DB instance is deployed in Multi-AZ mode.

A recent RDS database failover test caused a 40-second outage to the application. A solutions architect needs to design a solution to reduce the outage time to less than 20 seconds.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.  

Use Amazon ElastiCache for Memcached in front of the database

B.  

Use Amazon ElastiCache for Redis in front of the database.

C.  

Use RDS Proxy in front of the database

D.  

Migrate the database to Amazon Aurora MySQL

E.  

Create an Amazon Aurora Replica

F.  

Create an RDS for MySQL read replica
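A rough sketch of what option C involves, assuming a MySQL DB instance and a Secrets Manager secret already exist (all names and ARNs below are placeholders). RDS Proxy keeps client connections open during a failover, which shortens the outage the application sees.

import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-mysql-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",  # placeholder
    VpcSubnetIds=["subnet-aaa111", "subnet-bbb222"],          # placeholders
)

# Register the existing DB instance behind the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-mysql-proxy",
    DBInstanceIdentifiers=["critical-app-db"],                # placeholder
)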

Question # 170

A company has an application that generates reports and stores them in an Amazon S3 bucket. When a user accesses their report, the application generates a signed URL to allow the user to download the report. The company's security team has discovered that the files are public and that anyone can download them without authentication. The company has suspended the generation of new reports until the problem is resolved.

Which set of actions will immediately remediate the security issue without impacting the application's normal workflow?

Options:

A.  

Create an AWS Lambda function that applies a deny all policy for users who are not authenticated. Create a scheduled event to invoke the Lambda function

B.  

Review the AWS Trusted Advisor bucket permissions check and implement the recommended actions.

C.  

Run a script that puts a private ACL on all of the objects in the bucket.

D.  

Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
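For reference, option D maps onto the S3 PublicAccessBlock API. A minimal sketch (bucket name hypothetical): setting IgnorePublicAcls to True makes S3 ignore the public ACLs already on the objects without rewriting them, so presigned URLs keep working.

import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="report-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "IgnorePublicAcls": True,  # ignore existing public object ACLs
        "BlockPublicAcls": True,   # also reject new public ACLs (optional)
    },
)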

Question # 171

A finance company is running its business-critical application on current-generation Linux EC2 instances. The application includes a self-managed MySQL database performing heavy I/O operations. The application handles a moderate amount of traffic during the month without issues. However, it slows down during the final three days of each month due to month-end reporting, even though the company is using Elastic Load Balancers and Auto Scaling within its infrastructure to meet the increased demand.

Which of the following actions would allow the database to handle the month-end load with the LEAST impact on performance?

Options:

A.  

Pre-warming Elastic Load Balancers, using a bigger instance type, and changing all Amazon EBS volumes to GP2 volumes.

B.  

Performing a one-time migration of the database cluster to Amazon RDS, and creating several additional read replicas to handle the load during the end of the month.

C.  

Using Amazon CloudWatch with AWS Lambda to change the type, size, or IOPS of Amazon EBS volumes in the cluster based on a specific CloudWatch metric.

D.  

Replacing all existing Amazon EBS volumes with new Provisioned IOPS (PIOPS) volumes that have the maximum available storage size and I/O per second by taking snapshots before the end of the month and reverting back afterwards.
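To make option C concrete, here is a sketch of the kind of change a CloudWatch-triggered Lambda function could apply through the Elastic Volumes feature (the volume ID and values are placeholders). EBS volumes can be modified in place, without detaching.

import boto3

ec2 = boto3.client("ec2")

# Bump a data volume to Provisioned IOPS ahead of month-end reporting;
# the modification happens online, with no detach or downtime.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    VolumeType="io2",
    Iops=16000,
)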

Question # 172

A company is running an application in the AWS Cloud. The company's security team must approve the creation of all new IAM users. When a new IAM user is created, all access for the user must be removed automatically. The security team must then receive a notification to approve the user. The company has a multi-Region AWS CloudTrail trail in the AWS account.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.  

Create an Amazon EventBridge (Amazon CloudWatch Events) rule. Define a pattern with the detail-type value set to AWS API Call via CloudTrail and an eventName of CreateUser.

B.  

Configure CloudTrail to send a notification for the CreateUser event to an Amazon Simple Notification Service (Amazon SNS) topic.

C.  

Invoke a container that runs in Amazon Elastic Container Service (Amazon ECS) with AWS Fargate technology to remove access

D.  

Invoke an AWS Step Functions state machine to remove access.

E.  

Use Amazon Simple Notification Service (Amazon SNS) to notify the security team.

F.  

Use Amazon Pinpoint to notify the security team.
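A sketch of the EventBridge rule from option A, with a Step Functions state machine (option D) as the target; the ARNs are placeholders. CloudTrail must already be recording IAM management events for the pattern to match.

import json
import boto3

events = boto3.client("events")

# Match CreateUser calls recorded by CloudTrail.
events.put_rule(
    Name="on-iam-create-user",
    EventPattern=json.dumps({
        "source": ["aws.iam"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["CreateUser"]},
    }),
)

# Hand matching events to a state machine that removes the new user's
# access and notifies the security team through SNS.
events.put_targets(
    Rule="on-iam-create-user",
    Targets=[{
        "Id": "remove-access",
        "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:QuarantineUser",  # placeholder
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-sfn-role",            # placeholder
    }],
)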

Question # 173


A company runs production workloads on EC2 On-Demand Instances and RDS for PostgreSQL. The company wants to reduce costs without compromising availability or capacity.

Which solution will meet these requirements?

Options:

A.  

Use CUR and Lambda to terminate underutilized instances. Buy Savings Plans.

B.  

Use Budgets and Trusted Advisor, then manually terminate and buy RIs.

C.  

Use Compute Optimizer and Trusted Advisor for recommendations. Apply rightsizing, auto scaling, and purchase a Compute Savings Plan.

D.  

Use Cost Explorer, alerts, and replace with Spot Instances.
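As a sketch of the recommendation step in option C, Compute Optimizer can be queried programmatically. The response field names below reflect the boto3 API as I recall it, so treat them as indicative rather than definitive.

import boto3

co = boto3.client("compute-optimizer")

# List rightsizing findings for EC2; each recommendation includes the
# current instance type and cheaper alternatives that preserve capacity.
resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    options = [o["instanceType"] for o in rec["recommendationOptions"]]
    print(rec["instanceArn"], rec["finding"], options)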

Question # 174

A company owns a chain of travel agencies and is running an application in the AWS Cloud. Company employees use the application to search for information about travel destinations. Destination content is updated four times each year.

Two fixed Amazon EC2 instances serve the application. The company uses an Amazon Route 53 public hosted zone with a multivalue record of travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB as its primary data store. The company uses a self-hosted Redis instance as a caching solution.

During content updates, the load on the EC2 instances and the caching solution increases drastically. This increased load has led to downtime on several occasions. A solutions architect must update the application so that the application is highly available and can handle the load that is generated by the content updates.

Which solution will meet these requirements?

Options:

A.  

Set up DynamoDB Accelerator (DAX) as an in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the EC2 instances before the content updates.

B.

Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.

C.  

Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the application before the content updates.

D.  

Set up DynamoDB Accelerator (DAX) as an in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.
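The "scheduled scaling" step that options A and C rely on looks roughly like this (the group name and sizes are placeholders). Capacity is raised ahead of the quarterly content update, and the recurrence keeps it automatic.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the content update; a matching action (not shown)
# would scale back in afterward.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="travel-app-asg",  # placeholder
    ScheduledActionName="pre-content-update-scale-out",
    Recurrence="0 4 1 1,4,7,10 *",          # 04:00 UTC, four times a year
    MinSize=4,
    MaxSize=8,
    DesiredCapacity=6,
)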

Question # 175

A company is building an application on AWS. The application sends logs to an Amazon Elasticsearch Service (Amazon ES) cluster for analysis. All data must be stored within a VPC.

Some of the company's developers work from home. Other developers work from three different company office locations. The developers need to access Amazon ES to analyze and visualize logs directly from their local development machines.

Which solution will meet these requirements?

Options:

A.  

Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.

B.  

Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.

C.  

Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.

D.  

Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.

Question # 176

A company has purchased appliances from different vendors. The appliances all have IoT sensors. The sensors send status information in the vendors' proprietary formats to a legacy application that parses the information into JSON. The parsing is simple, but each vendor has a unique format. Once daily, the application parses all the JSON records and stores the records in a relational database for analysis.

The company needs to design a new data analysis solution that can deliver results faster and optimize costs.

Which solution will meet these requirements?

Options:

A.  

Connect the IoT sensors to AWS IoT Core. Set a rule to invoke an AWS Lambda function to parse the information and save a .csv file to Amazon S3. Use AWS Glue to catalog the files. Use Amazon Athena and Amazon QuickSight for analysis.

B.  

Migrate the application server to AWS Fargate, which will receive the information from IoT sensors and parse the information into a relational format. Save the parsed information to Amazon Redshift for analysis.

C.  

Create an AWS Transfer for SFTP server. Update the IoT sensor code to send the information as a .csv file through SFTP to the server. Use AWS Glue to catalog the files. Use Amazon Athena for analysis.

D.  

Use AWS Snowball Edge to collect data from the IoT sensors directly to perform local analysis. Periodically collect the data into Amazon Redshift to perform global analysis.
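To illustrate the front half of option A, a sketch of an AWS IoT Core topic rule that hands every incoming sensor message to a parsing Lambda function (the topic filter and function ARN are placeholders).

import boto3

iot = boto3.client("iot")

# Route every message on sensors/<vendor>/status to a Lambda function that
# parses the vendor-specific payload and writes a .csv object to S3.
iot.create_topic_rule(
    ruleName="parse_vendor_payloads",  # rule names allow only letters, digits, underscores
    topicRulePayload={
        "sql": "SELECT * FROM 'sensors/+/status'",
        "ruleDisabled": False,
        "actions": [{
            "lambda": {
                "functionArn": "arn:aws:lambda:us-east-1:111122223333:function:parse-sensor",  # placeholder
            },
        }],
    },
)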

Question # 177

A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The quality assurance (QA) department needs to launch and test the application. The application environments are currently launched by the manager of the department using an AWS CloudFormation template. To launch the stack, the manager uses a role with permission to use CloudFormation, EC2, and Auto Scaling APIs. The manager wants to allow QA to launch environments, but does not want to grant broad permissions to each user.

Which setup would achieve these goals?

Options:

A.  

Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to assume the manager's role, and restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.

B.  

Create an AWS Service Catalog product from the environment template. Add a launch constraint to the product that uses the manager's existing role. Give users in the QA department permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.

C.  

Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department permission to use CloudFormation and restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.

D.  

Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA department permission to use Elastic Beanstalk only. Train users to launch Elastic Beanstalk environments with the Elastic Beanstalk CLI, passing the existing role to the environment.
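The key moving part in option B is the launch constraint, which makes AWS Service Catalog assume the manager's existing role at provisioning time, so QA users need no broad EC2 or CloudFormation permissions of their own. A sketch with placeholder IDs and ARN:

import json
import boto3

sc = boto3.client("servicecatalog")

# Attach a LAUNCH constraint: provisioning runs under the given role,
# not under the permissions of the QA user who launches the product.
sc.create_constraint(
    PortfolioId="port-abc123",  # placeholder
    ProductId="prod-def456",    # placeholder
    Type="LAUNCH",
    Parameters=json.dumps({
        "RoleArn": "arn:aws:iam::111122223333:role/env-launch-role",  # placeholder
    }),
)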

Question # 178

A scientific company needs to process text and image data from an Amazon S3 bucket. The data is collected from several radar stations during a live, time-critical phase of a deep space mission. The radar stations upload the data to the source S3 bucket. The data is prefixed by radar station identification number.

The company created a destination S3 bucket in a second account. Data must be copied from the source S3 bucket to the destination S3 bucket to meet a compliance objective. The replication occurs through an S3 replication rule that covers all objects in the source S3 bucket.

One specific radar station is identified as having the most accurate data. Data replication at this radar station must be monitored for completion within 30 minutes after the radar station uploads the objects to the source S3 bucket.

What should a solutions architect do to meet these requirements?

Options:

A.  

Set up an AWS DataSync agent to replicate the prefixed data from the source S3 bucket to the destination S3 bucket. Select to use all available bandwidth on the task, and monitor the task to ensure that it is in the TRANSFERRING status. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.

B.  

In the second account, create another S3 bucket to receive data from the radar station with the most accurate data. Set up a new replication rule for this new S3 bucket to separate the replication from the other radar stations. Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.

C.  

Enable Amazon S3 Transfer Acceleration on the source S3 bucket, and configure the radar station with the most accurate data to use the new endpoint. Monitor the S3 destination bucket ' s TotalRequestLatency metric. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert if this status changes.

D.  

Create a new S3 replication rule on the source S3 bucket that filters for the keys that use the prefix of the radar station with the most accurate data. Enable S3 Replication Time Control (S3 RTC). Monitor the maximum replication time to the destination. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an alert when the time exceeds the desired threshold.
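A sketch of option D's prefix-filtered replication rule with S3 Replication Time Control enabled (the bucket names, role ARN, and station prefix are placeholders). RTC commits to a 15-minute replication window and publishes replication metrics that a CloudWatch or EventBridge alarm can watch against the 30-minute requirement.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="radar-source-bucket",  # placeholder
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder
        "Rules": [{
            "ID": "priority-station",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {"Prefix": "station-07/"},  # placeholder station prefix
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::radar-destination-bucket",  # placeholder
                # S3 RTC: 15 minutes is the only supported threshold.
                "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
            },
        }],
    },
)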

Question # 179

A company wants to migrate its website from its on-premises data center to AWS. The website service APIs are hosted on Docker containers in a self-managed container platform. To meet compliance requirements, the company has installed proprietary security agent software on each container platform node. MySQL databases are installed on two separate VMs in a source-replica setup.

A solutions architect must design a solution to migrate the entire website environment to AWS.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Deploy website services as containers on Amazon EC2 instances. For the EC2 instances, use a custom AMI that has the proprietary security agent software and container management software pre-installed. Migrate the MySQL database data to an Amazon Aurora DB cluster by using AWS DMS. Add an additional read replica to the Aurora DB cluster.

B.  

Create an Amazon EKS cluster that is configured with a managed node group. Use a custom AMI that has the proprietary security agent software pre-installed. Deploy website services as Kubernetes pods. Migrate the MySQL database data to an Amazon RDS for MySQL DB instance by using AWS DMS. Configure a Multi-AZ deployment for the RDS DB instance.

C.  

Deploy website services as containers on AWS Lambda functions. Create an Amazon API Gateway API to serve incoming requests to Lambda functions on the backend. Save static content in an Amazon S3 bucket. Migrate the MySQL database data to an Amazon RDS for MySQL DB instance by using AWS DMS. Configure a Multi-AZ deployment for the RDS DB instance.

D.  

Create an Amazon ECS cluster that uses the Fargate launch type. Configure Fargate with a custom AMI that has the proprietary security agent software pre-installed. Deploy website services as ECS tasks. Migrate the MySQL database data to an Amazon Aurora DB cluster by using AWS DMS. Add an additional read replica to the Aurora DB cluster.
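For option B, an EKS managed node group can only use a custom AMI (with the security agent baked in) through a launch template, roughly as sketched below; the cluster, launch template, role, and subnets are placeholders.

import boto3

eks = boto3.client("eks")

# The launch template pins the hardened custom AMI; amiType must then be
# CUSTOM so EKS does not substitute a stock Amazon Linux image.
eks.create_nodegroup(
    clusterName="website-cluster",               # placeholder
    nodegroupName="agent-hardened-nodes",
    subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholders
    nodeRole="arn:aws:iam::111122223333:role/eks-node-role",  # placeholder
    amiType="CUSTOM",
    launchTemplate={"id": "lt-0123456789abcdef0", "version": "1"},  # placeholder
)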

Question # 180

A company has dozens of AWS accounts for different teams, applications, and environments. The company has defined a custom set of controls that all accounts must have. The company is concerned that potential misconfigurations in the accounts could lead to security issues or noncompliance. A solutions architect must design a solution that deploys the custom controls by using infrastructure as code (IaC) in a repeatable way.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Configure AWS Config rules in each account to evaluate the account settings against the custom controls. Define AWS Lambda functions in AWS CloudFormation templates. Program the Lambda functions to remediate noncompliant AWS Config rules. Deploy the CloudFormation templates as stack sets during account creation. Configure the stack sets to invoke the Lambda functions.

B.  

Configure AWS Systems Manager associations to remediate configuration issues across accounts. Define the desired configuration state in an AWS CloudFormation template by using AWS::SSM::Association. Deploy the CloudFormation templates as stack sets to all accounts during account creation.

C.  

Enable AWS Control Tower to set up and govern the multi-account environment. Use blueprints that enforce security best practices. Use Customizations for AWS Control Tower and CloudFormation templates to define the custom controls for each account. Use Amazon EventBridge to deploy Customizations for AWS Control Tower during account-provisioning lifecycle events.

D.  

Enable AWS Security Hub in all the accounts to aggregate findings in a central administrator account. Develop AWS CloudFormation templates to create Amazon EventBridge rules, AWS Lambda functions, and CloudFormation stacks in each account to remediate Security Hub findings. Deploy the CloudFormation stacks during account provisioning to set up the automated remediation.
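All four options ultimately deploy IaC across many accounts; the common mechanism is a CloudFormation stack set with automatic deployment, sketched here under the assumption of an AWS Organizations setup (the template URL and OU ID are placeholders).

import boto3

cfn = boto3.client("cloudformation")

# Service-managed stack sets deploy to every account in the target OU and
# auto-deploy to accounts created later, keeping the controls repeatable.
cfn.create_stack_set(
    StackSetName="custom-security-controls",
    TemplateURL="https://s3.amazonaws.com/templates/controls.yaml",  # placeholder
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

cfn.create_stack_instances(
    StackSetName="custom-security-controls",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # placeholder
    Regions=["us-east-1"],
)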

Get SAP-C02 dumps and pass your exam in 24 hours!
