
Professional-Cloud-DevOps-Engineer: Google Cloud Certified - Professional Cloud DevOps Engineer Exam | Test Your Knowledge for Free


Professional-Cloud-DevOps-Engineer Practice Questions

Google Cloud Certified - Professional Cloud DevOps Engineer Exam

Last Updated: 2 days ago
Total Questions: 201

Dive into our fully updated and stable Professional-Cloud-DevOps-Engineer practice test platform, featuring all the latest Cloud DevOps Engineer exam questions added this week. Our preparation tool is more than just a Google study aid; it's a strategic advantage.

Our free Cloud DevOps Engineer practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key Professional-Cloud-DevOps-Engineer concepts. Use this test to pinpoint the areas where you need to focus your study.

Professional-Cloud-DevOps-Engineer PDF

Professional-Cloud-DevOps-Engineer PDF (Printable)
$43.75
$124.99

Professional-Cloud-DevOps-Engineer Testing Engine

Professional-Cloud-DevOps-Engineer Testing Engine
$50.75
$144.99

Professional-Cloud-DevOps-Engineer PDF + Testing Engine

Professional-Cloud-DevOps-Engineer PDF + Testing Engine
$63.70
$181.99
Question # 11

You are responding to a high-priority incident where a critical, user-facing payment service is experiencing a 50% error rate. The cause is a non-critical, batch analytics Dataflow pipeline flooding a shared Memorystore for Redis instance with writes, which has spiked read latency for the payment service. A full rollback of the Dataflow pipeline's deployment will take 15 minutes to complete through your CI/CD process. You need to restore the payment service as quickly as possible. What should you do?

Options:

A.  

Use Cloud Profiler to inspect the Dataflow pipeline's execution graph to pinpoint the source of the excessive writes.

B.  

In the Google Cloud console, edit the Memorystore for Redis instance and increase its capacity tier.

C.  

Initiate an automated rollback of the Dataflow pipeline's deployment to revert to the last stable version.

D.  

Cancel the active Dataflow job.
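
If the fastest mitigation is cancelling the job (option D), the flood of writes stops immediately rather than after the 15-minute CI/CD rollback. A minimal sketch of a cancellation call, assuming the Dataflow v1b3 REST surface via google-api-python-client; the project, region, and job IDs are placeholders:

```python
# Hedged sketch: request cancellation of a running Dataflow job by setting its
# requested state. Assumes google-api-python-client and application default
# credentials; all identifiers below are placeholders.
from googleapiclient.discovery import build

def cancel_dataflow_job(project_id: str, region: str, job_id: str) -> dict:
    dataflow = build("dataflow", "v1b3")
    return (
        dataflow.projects().locations().jobs()
        .update(
            projectId=project_id,
            location=region,
            jobId=job_id,
            body={"requestedState": "JOB_STATE_CANCELLED"},
        )
        .execute()
    )

if __name__ == "__main__":
    cancel_dataflow_job("my-project", "us-central1", "my-analytics-job-id")
```

The equivalent one-liner is `gcloud dataflow jobs cancel JOB_ID --region=REGION`; the full rollback can then proceed once the payment service has recovered.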

Question # 12

Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs while ensuring that the data is processed as quickly as possible. What should you do?

Options:

A.  

• Provide a secure file transfer protocol (SFTP) server on a Compute Engine instance so that third parties can upload batches of data, and provide appropriate credentials to the server.
• Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group.
• Use an image pre-loaded with the data processing software that terminates the instances when…

B.  

• Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket.
• Use a standard Google Kubernetes Engine (GKE) cluster and maintain two services: one that processes the batches of data, and one that monitors Cloud Storage for new batches of data.
• Stop the processing service when there are no batches of data to process.

C.  

• Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket.
• Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group.
• Use an image pre-loaded with the data processing software that terminates the instances when…

D.  

• Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket.
• Use Cloud Monitoring to detect new batches of data in the bucket and trigger a Cloud Function that processes the data.
• Set the Cloud Function to use the largest CPU possible to minimize the runtime of the processing.
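
The pattern options A and C describe, an object-finalize trigger that grows a managed instance group, can be sketched roughly as below. This is a hypothetical 1st-gen Cloud Function; the project, zone, and group names are placeholders:

```python
# Hedged sketch of the finalize-trigger pattern: a Cloud Function wired to the
# google.storage.object.finalize event that scales up a Compute Engine managed
# instance group by one worker per uploaded batch. Identifiers are placeholders.
from googleapiclient.discovery import build

PROJECT = "my-project"
ZONE = "us-central1-a"
MIG = "batch-processors"

def on_batch_uploaded(event, context):
    compute = build("compute", "v1")
    current = compute.instanceGroupManagers().get(
        project=PROJECT, zone=ZONE, instanceGroupManager=MIG
    ).execute()["targetSize"]
    # Instances run an image pre-loaded with the processing software and are
    # expected to terminate themselves when their batch is done.
    compute.instanceGroupManagers().resize(
        project=PROJECT, zone=ZONE, instanceGroupManager=MIG, size=current + 1
    ).execute()
    print(f"Scaling {MIG} to {current + 1} for new object {event['name']}")
```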

Question # 13

You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do?

Options:

A.  

Enable Packet Mirroring on the VPC.

B.  

Install the Ops Agent on the Compute Engine instances.

C.  

Enable logging on the firewall rule.

D.  

Enable VPC Flow Logs on the subnet.
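
For reference, option C is a single configuration change. A hedged sketch through the Compute Engine API (the project and rule name are placeholders); the one-step gcloud equivalent is `gcloud compute firewall-rules update RULE_NAME --enable-logging`:

```python
# Hedged sketch: turn on Firewall Rules Logging for an existing rule so every
# connection that matches it (including source IPs) is recorded in Cloud Logging.
# Assumes google-api-python-client; identifiers are placeholders.
from googleapiclient.discovery import build

def enable_firewall_logging(project: str, rule_name: str) -> dict:
    compute = build("compute", "v1")
    return compute.firewalls().patch(
        project=project,
        firewall=rule_name,
        body={"logConfig": {"enable": True}},
    ).execute()

if __name__ == "__main__":
    enable_firewall_logging("my-project", "allow-api-port")
```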

Question # 14

You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage. What should you do?

Options:

A.  

Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.

B.  

Develop an App Engine application that pulls the logs from Stackdriver and saves them in BigQuery.

C.  

Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage for seven years.

D.  

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs, and then select the bucket as the log export destination.
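
Option D maps directly onto the logging client library. A minimal sketch, assuming google-cloud-logging and placeholder names for the sink, filter, and bucket:

```python
# Hedged sketch: create a log sink whose destination is a Cloud Storage bucket,
# the low-cost archival path (pair it with a Coldline or Archive storage class
# and a seven-year retention/lifecycle policy on the bucket).
from google.cloud import logging

client = logging.Client(project="my-project")
sink = client.sink(
    "archive-app-logs",                               # sink name (placeholder)
    filter_='resource.type="gae_app"',                # placeholder filter
    destination="storage.googleapis.com/my-archive-bucket",
)
if not sink.exists():
    sink.create()
# Remember to grant the sink's writer identity write access to the bucket.
```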

Question # 15

You are performing a semi-annual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month for the next six months. Your service is fully containerized and runs on a Google Kubernetes Engine (GKE) standard cluster across three zones with cluster autoscaling enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure while you avoid unnecessary costs. How should you prepare to handle the predicted growth?

Options:

A.  

Verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs.

B.  

Because you deployed the service on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically regardless of growth rate.

C.  

Because you are only using 30% of deployed CPU capacity, there is significant headroom, and you do not need to add any additional capacity for this rate of growth.

D.  

Proactively add 80% more node capacity to account for six months of 10% growth rate, and then perform a load test to ensure that you have enough capacity.
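
A quick sanity check of the arithmetic behind these options:

```python
# Back-of-the-envelope capacity math for this scenario.
utilization = 0.30                # current share of deployed CPU capacity
growth = 1.10 ** 6                # 10% month-over-month for six months

print(f"growth factor: {growth:.2f}x")                        # ~1.77x (option D's ~80%)
print(f"projected utilization: {utilization * growth:.0%}")   # ~53%
# If one of the three zones fails, the surviving two absorb its load:
print(f"with a zone down: {utilization * growth * 3 / 2:.0%}")  # ~80%
```

The headroom argument in option C ignores zone failure: six months of compounded growth plus the loss of a zone pushes the surviving capacity toward its limit, which is why verifying autoscaling limits and load testing matters here.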

Question # 16

You are currently planning how to display Cloud Monitoring metrics for your organization's Google Cloud projects. Your organization has three folders and six projects:

[Diagram: organization hierarchy showing the three folders and their six projects]

You want to configure Cloud Monitoring dashboards to only display metrics from the projects within one folder. You need to ensure that the dashboards do not display metrics from projects in the other folders. You want to follow Google-recommended practices. What should you do?

Options:

A.  

Create a single new scoping project.

B.  

Create new scoping projects for each folder.

C.  

Use the current app-one-prod project as the scoping project.

D.  

Use the current app-one-dev, app-one-staging, and app-one-prod projects as the scoping projects for each folder.
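
If you create a scoping project per folder (option B), each folder's projects are then added to that scope as monitored projects. A hedged sketch using the beta gcloud surface from Python; the scoping project ID below is a hypothetical placeholder, and the beta command may change:

```python
# Hedged sketch: add a folder's projects to the metrics scope of a dedicated
# scoping project. Uses `gcloud beta monitoring metrics-scopes create`, a beta
# surface; the scoping project ID is a placeholder.
import subprocess

SCOPING_PROJECT = "folder-one-monitoring"   # hypothetical per-folder scoping project
MONITORED = ["app-one-dev", "app-one-staging", "app-one-prod"]

for project_id in MONITORED:
    subprocess.run(
        [
            "gcloud", "beta", "monitoring", "metrics-scopes", "create",
            f"projects/{project_id}",
            f"--project={SCOPING_PROJECT}",
        ],
        check=True,
    )
```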

Question # 17

Your team is preparing to launch a new API in Cloud Run. The API uses an OpenTelemetry agent to send distributed tracing data to Cloud Trace to monitor the time each request takes. The team has noticed inconsistent trace collection. You need to resolve the issue. What should you do?

Options:

A.  

Increase the CPU limit in Cloud Run from 2 to 4.

B.  

Use an HTTP health check.

C.  

Configure CPU to be allocated only during request processing.

D.  

Configure CPU to be always-allocated.
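
The relevant knob: with the default request-based allocation, Cloud Run throttles CPU to near zero between requests, which starves a background OpenTelemetry exporter and produces exactly this kind of inconsistent trace delivery. A sketch of switching to always-allocated CPU; the service name and region are placeholders:

```python
# Hedged sketch: flip a Cloud Run service to always-allocated CPU so background
# work (like span export) keeps running between requests. Shells out to gcloud;
# service name and region are placeholders.
import subprocess

subprocess.run(
    [
        "gcloud", "run", "services", "update", "my-api",
        "--region=us-central1",
        "--no-cpu-throttling",   # gcloud's spelling of "CPU is always allocated"
    ],
    check=True,
)
```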

Question # 18

Your team is running microservices in Google Kubernetes Engine (GKE). You want to detect consumption of an error budget to protect customers and define release policies. What should you do?

Options:

A.  

Create SLIs from metrics. Enable Alert Policies if the services do not pass.

B.  

Use the metrics from Anthos Service Mesh to measure the health of the microservices.

C.  

Create an SLO. Create an Alert Policy on select_slo_burn_rate.

D.  

Create an SLO and configure uptime checks for your services. Enable Alert Policies if the services do not pass.
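
The burn-rate selector in option C (spelled select_slo_burn_rate in the Monitoring filter language) can back an alert policy like the hedged sketch below, assuming the google-cloud-monitoring client; the SLO resource name, lookback window, and threshold are placeholders:

```python
# Hedged sketch: alert when an existing SLO's error budget burns faster than a
# chosen rate. The SLO name, window, and threshold below are placeholders.
from google.cloud import monitoring_v3

PROJECT = "projects/my-project"
SLO = "projects/my-project/services/my-service/serviceLevelObjectives/my-slo"

policy = monitoring_v3.AlertPolicy(
    display_name="Error budget fast burn",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="Burn rate over 1h window",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=f'select_slo_burn_rate("{SLO}", "1h")',
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=10.0,  # placeholder burn-rate threshold
            ),
        )
    ],
)

monitoring_v3.AlertPolicyServiceClient().create_alert_policy(
    name=PROJECT, alert_policy=policy
)
```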

Question # 19

Your company runs an e-commerce business. The application responsible for payment processing has structured JSON logging with the following schema:

[Schema not reproduced here; it includes a jsonPayload.user_email field.]

Capturing and accessing logs from the payment processing application is mandatory for operations, but the jsonPayload.user_email field contains personally identifiable information (PII). Your security team does not want the entire engineering team to have access to PII. You need to stop exposing PII to the engineering team and restrict access to security team members only. What should you do?

Options:

A.  

Apply a jsonPayload.user_email exclusion filter to the _Default bucket.

B.  

Apply the conditional role binding resource.name.extract("locations/global/buckets/(bucket)/") == "_Default" to the _Default bucket.

C.  

Apply a jsonPayload.user_email restricted field to the _Default bucket. Grant the Log Field Accessor role to the security team members.

D.  

Modify the application to toggle inclusion of user_email when the log_user_email environment variable is set to true. Restrict the engineering team members who can change the production environment variable by using the CODEOWNERS file.
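
Option C is two operations: restrict the field on the bucket, then grant the field-accessor role only to the security team. A hedged sketch shelling out to gcloud; the project ID and group address are placeholders:

```python
# Hedged sketch: hide jsonPayload.user_email from everyone except principals
# holding the Log Field Accessor role. Project and group are placeholders.
import subprocess

# Mark the field as restricted on the _Default bucket.
subprocess.run(
    [
        "gcloud", "logging", "buckets", "update", "_Default",
        "--location=global",
        "--restricted-fields=jsonPayload.user_email",
    ],
    check=True,
)

# Grant the Log Field Accessor role to the security team only.
subprocess.run(
    [
        "gcloud", "projects", "add-iam-policy-binding", "my-project",
        "--member=group:security-team@example.com",
        "--role=roles/logging.fieldAccessor",
    ],
    check=True,
)
```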

Question # 20

You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do?

Choose 2 answers

Options:

A.  

Create a trigger on the Cloud Build job. Set the repository event setting to 'Pull request'.

B.  

Add the owners file to the 'Included files' filter on the trigger.

C.  

Create a trigger on the Cloud Build job. Set the repository event setting to 'Push to a branch'.

D.  

Configure a branch protection rule for the main branch on the repository.

E.  

Enable the Approval option on the trigger.
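
A trigger combining the 'Push to a branch' event (option C) with required approvals (option E) can be sketched with the Cloud Build client library; the repository owner/name, branch regex, and build config path are placeholders:

```python
# Hedged sketch: a Cloud Build trigger that fires on pushes to main and holds
# the build until someone approves it. Owner/repo/paths are placeholders.
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

trigger = cloudbuild_v1.BuildTrigger(
    name="prod-image-build",
    github=cloudbuild_v1.GitHubEventsConfig(
        owner="my-org",
        name="my-repo",
        push=cloudbuild_v1.PushFilter(branch="^main$"),  # 'Push to a branch'
    ),
    filename="cloudbuild.yaml",
    # 'Approval' option on the trigger: builds wait for manual approval.
    approval_config=cloudbuild_v1.ApprovalConfig(approval_required=True),
)

client.create_build_trigger(project_id="my-project", trigger=trigger)
```

Branch protection (option D) is configured in GitHub itself rather than in Cloud Build, so it complements the trigger settings rather than replacing them.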

Get Professional-Cloud-DevOps-Engineer dumps and pass your exam in 24 hours!
