
DOP-C02 AWS Certified DevOps Engineer - Professional is now stable, with proven pass results | Test Your Knowledge for Free

DOP-C02 Practice Questions

AWS Certified DevOps Engineer - Professional

Last Updated: 3 days ago
Total Questions: 425

Dive into our fully updated and stable DOP-C02 practice test platform, featuring all the latest AWS Certified DevOps Engineer - Professional exam questions added this week. Our preparation tool is more than just an Amazon Web Services study aid; it's a strategic advantage.

Our free AWS Certified DevOps Engineer - Professional practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key DOP-C02 concepts. Use this test to pinpoint the areas where you need to focus your study.

DOP-C02 PDF (Printable)
$43.75 (list price $124.99)

DOP-C02 Testing Engine
$50.75 (list price $144.99)

DOP-C02 PDF + Testing Engine
$63.70 (list price $181.99)
Question # 11

A company requires that its internally facing web application be highly available. The architecture is made up of one Amazon EC2 web server instance and one NAT instance that provides outbound internet access for updates and accessing public data.

Which combination of architecture adjustments should the company implement to achieve high availability? (Choose two.)

Options:

A.  

Add the NAT instance to an EC2 Auto Scaling group that spans multiple Availability Zones. Update the route tables.

B.  

Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them.

C.  

Configure an Application Load Balancer in front of the EC2 instance. Configure Amazon CloudWatch alarms to recover the EC2 instance upon host failure.

D.  

Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables.

E.  

Replace the NAT instance with a NAT gateway that spans multiple Availability Zones. Update the route tables.
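
As a quick reference for the per-Availability-Zone NAT gateway pattern described in option D, the following boto3 sketch provisions one NAT gateway per AZ and points each private route table's default route at its local gateway. The subnet and route table IDs are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical IDs: one public subnet and one private route table per AZ.
    az_layout = {
        "us-east-1a": {"public_subnet": "subnet-aaa111", "private_route_table": "rtb-aaa111"},
        "us-east-1b": {"public_subnet": "subnet-bbb222", "private_route_table": "rtb-bbb222"},
    }

    for az, ids in az_layout.items():
        # Allocate an Elastic IP and create a NAT gateway in the AZ's public subnet.
        eip = ec2.allocate_address(Domain="vpc")
        nat = ec2.create_nat_gateway(SubnetId=ids["public_subnet"], AllocationId=eip["AllocationId"])
        nat_id = nat["NatGateway"]["NatGatewayId"]
        ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

        # Point the AZ's private route table default route at the local NAT gateway.
        ec2.replace_route(
            RouteTableId=ids["private_route_table"],
            DestinationCidrBlock="0.0.0.0/0",
            NatGatewayId=nat_id,
        )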

Question # 12

A company uses an AWS CodeCommit repository to store its source code and corresponding unit tests. The company has configured an AWS CodePipeline pipeline that includes an AWS CodeBuild project that runs when code is merged to the main branch of the repository.

The company wants the CodeBuild project to run the unit tests. If the unit tests pass, the CodeBuild project must tag the most recent commit.

How should the company configure the CodeBuild project to meet these requirements?

Options:

A.  

Configure the CodeBuild project to use native Git to clone the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use native Git to create a tag and to push the Git tag to the repository if the code passes the unit tests.

B.  

Configure the CodeBuild project to use native Git to clone the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new repository tag in the repository if the code passes the unit tests.

C.  

Configure the CodeBuild project to use AWS CLI commands to copy the code from the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new Git tag in the repository if the code passes the unit tests.

D.  

Configure the CodeBuild project to use AWS CLI commands to copy the code from the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new repository tag in the repository if the code passes the unit tests.
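
To make the native-Git approach in option A concrete, here is a minimal sketch of build commands (expressed in Python) that run the unit tests and, only if they pass, tag the most recent commit and push the tag back to CodeCommit. The test command and tag name are assumptions, and CodeBuild would also need the CodeCommit Git credential helper configured for the push to succeed.

    import subprocess

    def run(cmd):
        # Raise if any command fails so the build (and pipeline stage) fails too.
        subprocess.run(cmd, shell=True, check=True)

    # Assumed test command; replace with the project's real unit test runner.
    run("python -m pytest tests/")

    # Tests passed: tag the most recent commit and push the tag to CodeCommit.
    run("git tag -a build-passed -m 'Unit tests passed' HEAD")
    run("git push origin build-passed")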

Question # 13

A company is developing a web application and is using AWS CodeBuild for its CI/CD pipeline. The company must generate multiple artifacts from a single build process. The company also needs the ability to determine which build generated each artifact. The artifacts must be stored in an Amazon S3 bucket for further processing and deployment. Builds occur frequently and are based on a large Git repository. The company needs to optimize build times.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Configure the buildspec.yml file to specify multiple artifacts with different file sets. Enable local caching for the build process by using source cache mode. Use environment variables to dynamically name artifacts based on the build ID.

B.

Configure the buildspec.yml file to output all files as a single artifact. Enable local caching for the build process by using custom cache mode. Create an AWS Lambda function that is invoked by CodeBuild completion. Program the Lambda function to split the artifact into multiple files and to upload the files to the S3 bucket with dynamic names based on the build ID.

C.

Create separate CodeBuild projects for each artifact type. Enable local caching for the build process by using Docker layer cache mode. Configure each project to output a single artifact to the S3 bucket with a dynamic name based on the build ID. Use AWS Step Functions to orchestrate the projects in parallel.

D.

Set up CodeBuild to generate a single ZIP artifact that contains all files. Enable S3 caching for the build process. Use AWS CodePipeline with a custom action to extract the files and reorganize the files into multiple artifacts in the S3 bucket. Configure the custom action to dynamically name the files based on the time of the build.
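
For the dynamic-naming idea in option A, CodeBuild exposes the build identifier through the CODEBUILD_BUILD_ID environment variable. The sketch below (a hypothetical build step) uploads each generated artifact to S3 under a key that embeds that ID, so any object can be traced back to the build that produced it. The bucket name and file list are placeholders.

    import os
    import boto3

    s3 = boto3.client("s3")

    # CODEBUILD_BUILD_ID is set automatically inside a CodeBuild environment.
    build_id = os.environ["CODEBUILD_BUILD_ID"].replace(":", "-")

    # Hypothetical artifacts produced earlier in the build.
    for artifact in ["app.zip", "lambda-layer.zip", "docs.zip"]:
        s3.upload_file(
            Filename=f"dist/{artifact}",
            Bucket="example-artifact-bucket",
            Key=f"builds/{build_id}/{artifact}",
        )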

Question # 14

A company sends its AWS Network Firewall flow logs to an Amazon S3 bucket. The company then analyzes the flow logs by using Amazon Athena. The company needs to transform the flow logs and add additional data before the flow logs are delivered to the existing S3 bucket.

Which solution will meet these requirements?

Options:

A.  

Create an AWS Lambda function to transform the data and to write a new object to the existing S3 bucket. Configure the Lambda function with an S3 trigger for the existing S3 bucket. Specify all object create events for the event type. Acknowledge the recursive invocation.

B.  

Enable Amazon EventBridge notifications on the existing S3 bucket. Create a custom EventBridge event bus. Create an EventBridge rule that is associated with the custom event bus. Configure the rule to react to all object create events for the existing S3 bucket and to invoke an AWS Step Functions workflow. Configure a Step Functions task to transform the data and to write the data into a new S3 bucket.

C.  

Create an Amazon EventBridge rule that is associated with the default EventBridge event bus. Configure the rule to react to all object create events for the existing S3 bucket. Define a new S3 bucket as the target for the rule. Create an EventBridge input transformation to customize the event before passing the event to the rule target.

D.  

Create an Amazon Data Firehose delivery stream that is configured with an AWS Lambda transformer. Specify the existing S3 bucket as the destination. Change the Network Firewall logging destination from Amazon S3 to Firehose.
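
Option D relies on a Lambda transformation function attached to the Firehose delivery stream. The function receives base64-encoded records and must return each one with a recordId, a result, and the transformed data, as in this minimal sketch; the enrichment field added here is purely illustrative.

    import base64
    import json

    def handler(event, context):
        output = []
        for record in event["records"]:
            payload = json.loads(base64.b64decode(record["data"]))

            # Illustrative enrichment: add a field before delivery to S3.
            payload["enriched_by"] = "flow-log-transformer"

            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode((json.dumps(payload) + "\n").encode()).decode(),
            })
        return {"records": output}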

Question # 15

A company runs a microservices application on Amazon Elastic Kubernetes Service (Amazon EKS). Users recently reported significant delays while accessing an account summary feature, particularly during peak business hours.

A DevOps engineer used Amazon CloudWatch metrics and logs to troubleshoot the issue. The logs indicated normal CPU and memory utilization on the EKS nodes. The DevOps engineer was not able to identify where the delays occurred within the microservices architecture.

The DevOps engineer needs to increase the observability of the application to pinpoint where the delays are occurring.

Which solution will meet these requirements?

Options:

A.  

Deploy the AWS X-Ray daemon as a DaemonSet in the EKS cluster. Use the X-Ray SDK to instrument the application code. Redeploy the application.

B.  

Enable CloudWatch Container Insights for the EKS cluster. Use the Container Insights data to diagnose the delays.

C.  

Create alarms based on the existing CloudWatch metrics. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send email alerts.

D.  

Increase the timeout settings in the application code for network operations to allow more time for operations to finish.
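
Option A pairs the X-Ray daemon DaemonSet with SDK instrumentation in the application code. For a Python microservice, the instrumentation can look like the sketch below; the service name and daemon address (the DaemonSet's in-cluster endpoint) are assumptions, and patch_all() wraps supported libraries such as boto3 and requests so downstream calls appear in the trace.

    from aws_xray_sdk.core import xray_recorder, patch_all

    # Point the SDK at the X-Ray daemon running as a DaemonSet (hypothetical address).
    xray_recorder.configure(
        service="account-summary",
        daemon_address="xray-daemon.default:2000",
    )

    # Automatically instrument supported libraries (boto3, requests, etc.).
    patch_all()

    @xray_recorder.capture("load_account_summary")
    def load_account_summary(account_id):
        # Application logic here; the call is recorded as a subsegment.
        ...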

Question # 16

A DevOps engineer manages an AWS CodePipeline pipeline that builds and deploys a web application on AWS. The pipeline has a source stage, a build stage, and a deploy stage. When deployed properly, the web application responds with a 200 OK HTTP response code when the URL of the home page is requested. The home page recently returned a 503 HTTP response code after CodePipeline deployed the application.

The DevOps engineer needs to add an automated test into the pipeline. The automated test must ensure that the application returns a 200 OK HTTP response code after the application is deployed. The pipeline must fail if the response code is not present during the test. The DevOps engineer has added a CheckURL stage after the deploy stage in the pipeline.

What should the DevOps engineer do next to implement the automated test?

Options:

A.  

Configure the CheckURL stage to use an Amazon CloudWatch action. Configure the action to use a canary synthetic monitoring check on the application URL and to report a success or failure to CodePipeline.

B.  

Create an AWS Lambda function to check the response code status of the URL and to report a success or failure to CodePipeline. Configure an action in the CheckURL stage to invoke the Lambda function.

C.  

Configure the CheckURL stage to use an AWS CodeDeploy action. Configure the action with an input artifact that is the URL of the application and to report a success or failure to CodePipeline.

D.  

Deploy an Amazon API Gateway HTTP API that checks the response code status of the URL and that reports success or failure to CodePipeline. Configure the CheckURL stage to use the AWS Device Farm test action and to provide the API Gateway HTTP API as an input artifact.
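
The Lambda-based check in option B has to report its result back to CodePipeline explicitly by calling PutJobSuccessResult or PutJobFailureResult with the job ID that the pipeline passes in the invocation event. A minimal sketch, assuming the application URL is supplied through the action's UserParameters:

    import urllib.request
    import boto3

    codepipeline = boto3.client("codepipeline")

    def handler(event, context):
        job = event["CodePipeline.job"]
        job_id = job["id"]
        # Assumes the URL is passed as the action's UserParameters.
        url = job["data"]["actionConfiguration"]["configuration"]["UserParameters"]

        try:
            status = urllib.request.urlopen(url, timeout=10).status
            if status == 200:
                codepipeline.put_job_success_result(jobId=job_id)
            else:
                codepipeline.put_job_failure_result(
                    jobId=job_id,
                    failureDetails={"type": "JobFailed", "message": f"HTTP {status}"},
                )
        except Exception as exc:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={"type": "JobFailed", "message": str(exc)},
            )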

Question # 17

A development team wants to use AWS CloudFormation stacks to deploy an application. However, the developer IAM role does not have the required permissions to provision the resources that are specified in the AWS CloudFormation template. A DevOps engineer needs to implement a solution that allows the developers to deploy the stacks. The solution must follow the principle of least privilege.

Which solution will meet these requirements?

Options:

A.  

Create an IAM policy that allows the developers to provision the required resources. Attach the policy to the developer IAM role.

B.  

Create an IAM policy that allows full access to AWS CloudFormation. Attach the policy to the developer IAM role.

C.  

Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role a cloudformation:* action. Use the new service role during stack deployments.

D.  

Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role the iam:PassRole permission. Use the new service role during stack deployments.
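
To illustrate the service-role pattern in option D: the developer role only needs CloudFormation stack actions plus permission to pass the pre-created service role, while the service role (not the developer) carries the resource-provisioning permissions. A hedged sketch with hypothetical role names and ARNs:

    import json
    import boto3

    # Policy attached to the developer IAM role (hypothetical service-role ARN).
    developer_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["cloudformation:CreateStack", "cloudformation:UpdateStack",
                           "cloudformation:DeleteStack", "cloudformation:DescribeStacks"],
                "Resource": "*",
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "arn:aws:iam::111122223333:role/cfn-deploy-service-role",
            },
        ],
    }

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="developer-role",
        PolicyName="cfn-deploy-least-privilege",
        PolicyDocument=json.dumps(developer_policy),
    )

    # The developer deploys the stack; the service role does the provisioning.
    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName="example-app",
        TemplateURL="https://example-bucket.s3.amazonaws.com/template.yaml",
        RoleARN="arn:aws:iam::111122223333:role/cfn-deploy-service-role",
    )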

Question # 18

A company uses AWS Storage Gateway in file gateway mode in front of an Amazon S3 bucket that is used by multiple resources. In the morning when business begins, users do not see the objects processed by a third party the previous evening. When a DevOps engineer looks directly at the S3 bucket, the data is there, but it is missing in Storage Gateway.

Which solution ensures that all the updated third-party files are available in the morning?

Options:

A.  

Configure a nightly Amazon EventBridge event to invoke an AWS Lambda function to run the RefreshCache command for Storage Gateway.

B.  

Instruct the third party to put data into the S3 bucket using AWS Transfer for SFTP.

C.  

Modify Storage Gateway to run in volume gateway mode.

D.  

Use S3 Same-Region Replication to replicate any changes made directly in the S3 bucket to Storage Gateway.
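
Option A hinges on the Storage Gateway RefreshCache API, which makes the file gateway re-list the S3 bucket so objects written directly to S3 (for example by the third party) become visible through the file share. A minimal Lambda sketch that a nightly EventBridge rule could invoke, with a hypothetical file share ARN:

    import boto3

    storagegateway = boto3.client("storagegateway")

    def handler(event, context):
        # Re-list the S3 bucket contents for the file share so objects written
        # directly to S3 become visible through the file gateway.
        storagegateway.refresh_cache(
            FileShareARN="arn:aws:storagegateway:us-east-1:111122223333:share/share-EXAMPLE",
            FolderList=["/"],
            Recursive=True,
        )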

Question # 19

A company has implemented a new microservices-based application on an Amazon Elastic Container Service (Amazon ECS) cluster. After each deployment, the company wants to validate the critical user journeys and API endpoints before routing traffic to the new application version.

The company must implement an automated solution to detect issues in the new deployment and to initiate a rollback if necessary.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Set up Amazon CloudWatch Application Insights for the ECS cluster. Create an Amazon EventBridge rule to invoke an AWS Lambda function to analyze the task states. Program the Lambda function to use the ECS UpdateService API call to initiate a rollback if a specific percentage of tasks fail.

B.  

Set up Amazon CloudWatch Application Insights for the ECS cluster. Configure Application Insights to monitor key performance indicators of the microservices in the critical user journeys and API calls. Create CloudWatch alarms based on the insights. Use EventBridge to invoke an AWS Step Functions workflow to evaluate the alarms. Configure the workflow to initiate a rollback if necessary by using the alarms' built-in integration with Amazon ECS.

C.  

Create CloudWatch Synthetics canaries that simulate critical user journeys and API calls. Implement AWS X-Ray tracing for all the microservices. Configure X-Ray to send traces to CloudWatch. Create CloudWatch alarms based on error rates and latency metrics. Create a Lambda function to analyze the traces and to initiate a rollback if necessary by using the alarms' built-in integration with Amazon ECS.

D.  

Create CloudWatch Synthetics canaries that simulate critical user journeys and API calls. Configure the canaries to run against the new deployment. Create CloudWatch alarms that are invoked when canaries fail. Use the alarms' built-in integration with Amazon ECS to initiate a rollback if the alarms are invoked before traffic is routed to the new deployment.
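
For the canary-plus-alarm pattern in options C and D, CloudWatch Synthetics publishes metrics such as SuccessPercent per canary, and an alarm on that metric is what deployment tooling (for example an ECS deployment configured for alarm-based rollback) reacts to. A sketch of the alarm, with a hypothetical canary name and threshold:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm on the canary's success percentage (hypothetical canary name/threshold).
    cloudwatch.put_metric_alarm(
        AlarmName="account-summary-canary-failures",
        Namespace="CloudWatchSynthetics",
        MetricName="SuccessPercent",
        Dimensions=[{"Name": "CanaryName", "Value": "critical-user-journeys"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=90,
        ComparisonOperator="LessThanThreshold",
        TreatMissingData="breaching",
    )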

Question # 20

A company has an RPO of 24 hours and an RTO of 10 minutes for a critical web application that runs on Amazon EC2 instances. The company uses AWS Organizations to manage its AWS accounts. The company wants to set up AWS Backup for its AWS environment.

A DevOps engineer configures AWS Organizations for AWS Backup. The DevOps engineer creates a new centralized AWS account to store the backups. Each EC2 instance has four Amazon Elastic Block Store (Amazon EBS) volumes attached.

Which solution will meet this requirement MOST securely?

Options:

A.  

Create encrypted backup vaults and customer managed AWS KMS keys in both accounts. Configure AWS Backup to create full EC2 backups as AMIs. Copy the backups to the centralized vault.

B.  

Create encrypted vaults in both accounts by using the source account's AWS KMS key. Configure AWS Backup to create EC2 AMIs. Copy the AMIs to the centralized vault.

C.  

Create backup vaults in both accounts. Use AWS managed keys for encryption. Configure AWS Backup to create EC2 AMIs. Copy the AMIs to the centralized vault.

D.  

Create encrypted vaults in both accounts. Use a customer managed KMS key in the source account. Use an AWS managed key in the centralized account. Configure AWS Backup to create EC2 AMIs. Copy the AMIs to the centralized vault.
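
Option A's pattern (a customer managed KMS key and an encrypted vault in each account) can be sketched with boto3 as below; the same two calls would be run in both the source account and the centralized backup account, and the key policy would additionally need to allow the cross-account copy. Names and descriptions are placeholders.

    import boto3

    kms = boto3.client("kms")
    backup = boto3.client("backup")

    # Customer managed key for backup encryption (run in each account).
    key = kms.create_key(Description="Customer managed key for AWS Backup vault")
    key_arn = key["KeyMetadata"]["Arn"]

    # Encrypted backup vault that uses the customer managed key.
    backup.create_backup_vault(
        BackupVaultName="ec2-backup-vault",
        EncryptionKeyArn=key_arn,
    )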

