
Associate-Cloud-Engineer: Google Cloud Certified - Associate Cloud Engineer Practice Questions | Test Your Knowledge for Free

Associate-Cloud-Engineer Practice Questions

Google Cloud Certified - Associate Cloud Engineer

Last Update: 4 days ago
Total Questions: 332

Dive into our fully updated and stable Associate-Cloud-Engineer practice test platform, featuring all the latest Google Cloud Certified exam questions added this week. Our preparation tool is more than just a Google study aid; it's a strategic advantage.

Our free Google Cloud Certified practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key Associate-Cloud-Engineer concepts. Use this test to pinpoint the areas where you should focus your study.

Associate-Cloud-Engineer PDF

Associate-Cloud-Engineer PDF (Printable)
$43.75
$124.99

Associate-Cloud-Engineer Testing Engine

Associate-Cloud-Engineer Testing Engine
$50.75
$144.99

Associate-Cloud-Engineer PDF + Testing Engine

Associate-Cloud-Engineer PDF (Printable) + Testing Engine
$63.70
$181.99
Question # 51

You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods. What should you do?

Options:

A.  

Use Binary Authorization and whitelist only the container images used by your customers’ Pods.

B.  

Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.

C.  

Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.

D.  

Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.

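For context on the GKE Sandbox approach named in option C, a gVisor-enabled node pool and a matching Pod spec can be sketched as follows (cluster, pool, project, and image names are placeholders):

```shell
# Create a node pool with GKE Sandbox (gVisor) enabled -- all names are illustrative.
gcloud container node-pools create sandbox-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --sandbox type=gvisor

# Pod spec referencing the gvisor RuntimeClass so it schedules onto sandboxed nodes.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: customer-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: gcr.io/my-project/customer-app:latest
EOF
```

gVisor intercepts the container's system calls in user space, so arbitrary customer code is kept away from the node's kernel.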
Question # 52

You will have several applications running on different Compute Engine instances in the same project. You want to specify at a more granular level the service account each instance uses when calling Google Cloud APIs. What should you do?

Options:

A.  

When creating the instances, specify a Service Account for each instance

B.  

When creating the instances, assign the name of each Service Account as instance metadata

C.  

After starting the instances, use gcloud compute instances update to specify a Service Account for each instance

D.  

After starting the instances, use gcloud compute instances update to assign the name of the relevant Service Account as instance metadata

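As background for the per-instance service account pattern this question tests, a dedicated service account can be attached when each instance is created (instance, zone, and account names below are placeholders):

```shell
# Attach a dedicated service account to the instance at creation time.
gcloud compute instances create app-1-instance \
    --zone=us-central1-a \
    --service-account=app-1-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
```

With `--scopes=cloud-platform`, the APIs the instance can actually call are governed by the IAM roles granted to that service account.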
Question # 53

You have several hundred microservice applications running in a Google Kubernetes Engine (GKE) cluster. Each microservice is a deployment with resource limits configured for each container in the deployment. You've observed that the resource limits for memory and CPU are not appropriately set for many of the microservices. You want to ensure that each microservice has right sized limits for memory and CPU. What should you do?

Options:

A.  

Modify the cluster's node pool machine type and choose a machine type with more memory and CPU.

B.  

Configure a Horizontal Pod Autoscaler for each microservice.

C.  

Configure GKE cluster autoscaling.

D.  

Configure a Vertical Pod Autoscaler for each microservice.

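A Vertical Pod Autoscaler manifest targeting one of the microservice Deployments might look like this sketch (the Deployment name is a placeholder, and VPA must be enabled on the cluster):

```shell
# Apply a VerticalPodAutoscaler that right-sizes CPU/memory requests
# for the Pods of the targeted Deployment.
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: checkout-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout
  updatePolicy:
    updateMode: "Auto"
EOF
```

Setting `updateMode: "Off"` instead makes the VPA recommendation-only, which is useful for observing suggested limits before applying them.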
Question # 54

You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end. What should you do?

Options:

A.  

Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.

B.  

Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.

C.  

Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.

D.  

Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

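A lifecycle rule that moves noncurrent object versions to a colder storage class after 30 days can be sketched as follows (the bucket name is a placeholder; Nearline is shown because it suits once-a-month access):

```shell
# lifecycle.json: transition noncurrent versions to Nearline after 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"daysSinceNoncurrentTime": 30}
    }
  ]
}
EOF

gcloud storage buckets update gs://my-archive-bucket --lifecycle-file=lifecycle.json
```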
Question # 55

All development (dev) teams in your organization are located in the United States. Each dev team has its own Google Cloud project. You want to restrict access so that each dev team can only create cloud resources in the United States (US). What should you do?

Options:

A.  

Create a folder to contain all the dev projects. Create an organization policy to limit resources to US locations.

B.  

Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.

C.  

Create an Identity and Access Management (IAM) policy to restrict the resource locations in the US. Apply the policy to all dev projects.

D.  

Create an Identity and Access Management (IAM) policy to restrict the resource locations in all dev projects. Apply the policy to all dev roles.

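The folder-plus-organization-policy approach can be sketched with the `gcp.resourceLocations` constraint (the folder ID is a placeholder):

```shell
# Allow resource creation only in US locations for everything under the folder.
gcloud resource-manager org-policies allow gcp.resourceLocations \
    in:us-locations --folder=123456789
```

`in:us-locations` is a predefined value group covering all US regions, so new regions are included automatically.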
Question # 56

You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?

Options:

A.  

Create a new subnet in the same region as the subnet being used.

B.  

Add an alias IP range to the subnet used by the GKE clusters.

C.  

Create a new VPC, and set up VPC peering with the existing VPC.

D.  

Expand the CIDR range of the relevant subnet for the cluster.

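For reference, a subnet's primary CIDR range can be expanded in place without recreating the subnet (subnet name, region, and prefix length below are placeholders):

```shell
# Expand the subnet's primary range; the new prefix length must be
# smaller than the current one (i.e., a larger range).
gcloud compute networks subnets expand-ip-range gke-subnet \
    --region=us-central1 \
    --prefix-length=20
```

Expansion only grows the range; a subnet's primary range can never be shrunk.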
Question # 57

Your company stores data from multiple sources that have different data storage requirements. These data include:

1. Customer data that is structured and read with complex queries

2. Historical log data that is large in volume and accessed infrequently

3. Real-time sensor data with high-velocity writes, which needs to be available for analysis but can tolerate some data loss

You need to design the most cost-effective storage solution that fulfills all data storage requirements. What should you do?

Options:

A.  

Use Spanner for all data.

B.  

Use Cloud SQL for customer data, Cloud Storage (Coldline) for historical logs, and BigQuery for sensor data.

C.  

Use Cloud SQL for customer data, Cloud Storage (Archive) for historical logs, and Bigtable for sensor data.

D.  

Use Firestore for customer data, Cloud Storage (Nearline) for historical logs, and Bigtable for sensor data.

Question # 58

Your company is running a critical workload on a single Compute Engine VM instance. Your company's disaster recovery policies require you to backup the entire instance's disk data every day. The backups must be retained for 7 days. You must configure a backup solution that complies with your company's security policies and requires minimal setup and configuration. What should you do?

Options:

A.  

Configure the instance to use persistent disk asynchronous replication.

B.  

Configure daily scheduled persistent disk snapshots with a retention period of 7 days.

C.  

Configure Cloud Scheduler to trigger a Cloud Function each day that creates a new machine image and deletes machine images that are older than 7 days.

D.  

Configure a bash script using gsutil to run daily through a cron job. Copy the disk's files to a Cloud Storage bucket with archive storage class and an object lifecycle rule to delete the objects after 7 days.

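A scheduled-snapshot setup with 7-day retention can be sketched as follows (policy name, region, zone, disk name, and start time are placeholders):

```shell
# Create a snapshot schedule: daily at 04:00 UTC, keep each snapshot 7 days.
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=7

# Attach the schedule to the VM's persistent disk.
gcloud compute disks add-resource-policies my-critical-disk \
    --zone=us-central1-a \
    --resource-policies=daily-backup
```

Once attached, snapshot creation and expiry are fully managed by Compute Engine, with no schedulers or scripts to maintain.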
Question # 59

You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar environment on GCP. What should you do?

Options:

A.  

When creating the VM, use machine type n1-standard-96.

B.  

When creating the VM, use Intel Skylake as the CPU platform.

C.  

Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.

D.  

Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.

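Selecting a 96-vCPU machine type at creation time looks like this (instance name and zone are placeholders):

```shell
# n1-standard-96 provides 96 vCPUs and 360 GB of memory.
gcloud compute instances create migrated-app \
    --zone=us-central1-a \
    --machine-type=n1-standard-96
```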
Question # 60

You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to examine the status of your Pod and observe that one of them is still in Pending status:

[kubectl get pods output not shown; one Pod's STATUS reads Pending]

What is the most likely cause?

Options:

A.  

The pending Pod's resource requests are too large to fit on a single node of the cluster.

B.  

Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.

C.  

The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.

D.  

The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods’ status. It is currently being rescheduled on a new node.

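Whichever cause applies, the scheduler's reasoning for a Pending Pod can be inspected directly (the Pod name below is a placeholder):

```shell
# The Events section at the end of the output explains why the Pod is
# unscheduled (e.g., "Insufficient cpu") or being rescheduled after a
# preemptible node was reclaimed.
kubectl describe pod my-deployment-abc123
```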
Get Associate-Cloud-Engineer dumps and pass your exam in 24 hours!
