
Good news: Professional-Cloud-Architect (Google Certified Professional - Cloud Architect, GCP) is now stable and delivering passing results.

Professional-Cloud-Architect Practice Exam Questions and Answers

Google Certified Professional - Cloud Architect (GCP)

Last Update 20 hours ago
Total Questions : 275

Professional-Cloud-Architect is now stable; all the latest exam questions were added 20 hours ago. Just download our full package and start your journey toward the Google Certified Professional - Cloud Architect (GCP) certification. All of these Google Professional-Cloud-Architect practice exam questions are real and verified by our experts in the relevant industry fields.

Professional-Cloud-Architect PDF

Professional-Cloud-Architect PDF (Printable)
$48
$119.99

Professional-Cloud-Architect Testing Engine

Professional-Cloud-Architect Testing Engine
$56
$139.99

Professional-Cloud-Architect PDF + Testing Engine

Professional-Cloud-Architect PDF (Printable) + Testing Engine
$70.80
$176.99
Question # 1

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

Options:

A.  

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.  

Revoke the compute.networkAdmin role from all users in the project with front end instances.

C.  

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.  

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.

Discussion 0
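For background on the mechanism option A describes: Google Cloud's `compute.vmExternalIpAccess` list constraint can restrict external IP assignment to an explicit allowlist of instances. Below is a minimal sketch of such a policy, applied with `gcloud org-policies set-policy`; the project, zone, and instance names are hypothetical:

```yaml
# Organization policy allowing external IP addresses only on approved
# frontend instances. Any instance not on the allowlist cannot be
# assigned an external IP address.
name: projects/ehr-prod/policies/compute.vmExternalIpAccess
spec:
  rules:
  - values:
      allowedValues:
      - projects/ehr-prod/zones/us-central1-a/instances/frontend-1
      - projects/ehr-prod/zones/us-central1-b/instances/frontend-2
```

Because the constraint is enforced by the platform rather than by IAM role hygiene, it prevents the misconfiguration regardless of who holds which role.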
Question # 2

For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR's network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flows from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.  

Add a new Dedicated Interconnect connection

B.  

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 Gbps

C.  

Add three new Cloud VPN connections

D.  

Add a new Carrier Peering connection

Discussion 0
Question # 3

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

Options:

A.  

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.  

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.  

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.  

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Discussion 0
Question # 4

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win’s business and technical requirements, what should you do?

Options:

A.  

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.

Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.  

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.

Enable default storage encryption before storing files in Cloud Storage.

C.  

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements.

Utilize Google’s default encryption at rest when storing files in Cloud Storage.

D.  

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Discussion 0
Question # 5

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.  

Migrate the web application layer to App Engine, MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.  

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.  

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.  

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Discussion 0
Question # 6

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

Options:

A.  

Use Stackdriver Trace to create a trace list analysis.

B.  

Use Stackdriver Monitoring to create a dashboard on the project’s activity.

C.  

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.

D.  

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.

Discussion 0
Question # 7

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

Options:

A.  

Web applications deployed using App Engine standard environment

B.  

RabbitMQ deployed using an unmanaged instance group

C.  

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.  

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Discussion 0
Question # 8

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year, they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

Options:

A.  

Have the vehicle’s computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.  

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.

C.  

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.  

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Discussion 0
Question # 9

For this question, refer to the TerramEarth case study.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?

Options:

A.  

Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.

B.  

Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.

C.  

Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.

D.  

Directly transfer the files to a different Google Cloud Regional Storage bucket location in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.

Discussion 0
Question # 10

For this question, refer to the TerramEarth case study.

TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will experience a catastrophic failure. You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?

A) [architecture diagram]

B) [architecture diagram]

C) [architecture diagram]

D) [architecture diagram]

Options:

A.  

Option A

B.  

Option B

C.  

Option C

D.  

Option D

Discussion 0
Question # 11

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

Options:

A.  

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.  

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.  

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.  

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Discussion 0
Question # 12

For this question, refer to the TerramEarth case study.

TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, or roughly 40 TB per hour. How should you design the data ingestion?

Options:

A.  

Vehicles write data directly to GCS.

B.  

Vehicles write data directly to Google Cloud Pub/Sub.

C.  

Vehicles stream data directly to Google BigQuery.

D.  

Vehicles continue to write data using the existing system (FTP).

Discussion 0
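As a sanity check on the figures in this question: 20 million vehicles each writing one 600-byte record per second works out to 12 GB/s, which is about 40 TiB per hour, consistent with the volume quoted above. The arithmetic, in plain Python with no GCP dependencies:

```python
# Back-of-the-envelope ingestion volume for 20 million vehicles,
# each producing one 600-byte record per second.
records_per_second = 20_000_000
record_bytes = 600

bytes_per_second = records_per_second * record_bytes   # 12,000,000,000 B/s
gb_per_second = bytes_per_second / 10**9               # decimal gigabytes per second
tib_per_hour = bytes_per_second * 3600 / 2**40         # binary terabytes (TiB) per hour

print(f"{gb_per_second:.0f} GB/s, {tib_per_hour:.1f} TiB/hour")
# → 12 GB/s, 39.3 TiB/hour
```

The "40 TB an hour" in the question matches the binary (TiB) reading of this rate, a sustained volume far beyond what a fleet of FTP uploads is designed to absorb.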
Question # 13

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data, such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?

Options:

A.  

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B.  

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C.  

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D.  

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

Discussion 0
Question # 14

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

Options:

A.  

gcloud compute firewall-rules update hlr-policy \

--priority 1000 \

--target-tags sourceiplist-fastly \

--allow tcp:443

B.  

gcloud compute security-policies rules update 1000 \

--security-policy hlr-policy \

--expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \

--action "allow"

C.  

gcloud compute firewall-rules update sourceiplist-fastly \

--priority 1000 \

--allow tcp:443

D.  

gcloud compute priority-policies rules update 1000 \

--security-policy from-fastly \

--src-ip-ranges \

--action "allow"

Discussion 0
Question # 15

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?

Options:

A.  

Use Explainable AI.

B.  

Use Vision AI.

C.  

Use Google Cloud’s operations suite.

D.  

Use Jupyter Notebooks.

Discussion 0
Question # 16

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

Options:

A.  

Error rates for requests from Asia

B.  

Latency difference between US and Asia

C.  

Total visits, error rates, and latency from Asia

D.  

Total visits and average latency for users in Asia

E.  

The number of character sets present in the database

Discussion 0
Question # 17

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

Options:

A.  

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.  

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.  

Create a single G Suite account to manage users with each stage of each application in its own project.

D.  

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Discussion 0
Question # 18

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

Options:

A.  

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.  

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.  

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs

D.  

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Discussion 0
Question # 19

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

[Infrastructure diagram]

Options:

A.  

A single VPN tunnel, which limits throughput

B.  

A tier of Google Cloud Storage that is not suited for this task

C.  

A copy command that is not suited to operate over long distances

D.  

Fewer virtual machines (VMs) in GCP than on-premises machines

E.  

A separate storage layer outside the VMs, which is not suited for this task

F.  

Complicated internet connectivity between the on-premises infrastructure and GCP

Discussion 0
Question # 20

You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do?

Options:

A.  

Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.

B.  

Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.

C.  

Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.

D.  

Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.

Discussion 0
Question # 21

You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?

Options:

A.  

Use a persistent disk for each instance.

B.  

Use a regional persistent disk for each instance.

C.  

Create a Cloud Filestore instance and mount it in each instance.

D.  

Create a Cloud Storage bucket and mount it in each instance using gcsfuse.

Discussion 0
Question # 22

Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?

Options:

A.  

Change the autoscaling metric to agent.googleapis.com/memory/percent_used.

B.  

Restart the affected instances on a staggered schedule.

C.  

SSH to each instance and restart the application process.

D.  

Increase the maximum number of instances in the autoscaling group.

Discussion 0
Question # 23

A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:

1. Be based on open-source technology for cloud portability

2. Dynamically scale compute capacity based on demand

3. Support continuous software delivery

4. Run multiple segregated copies of the same application stack

5. Deploy application bundles using dynamic templates

6. Route network traffic to specific services based on URL

Which combination of technologies will meet all of his requirements?

Options:

A.  

Google Container Engine, Jenkins, and Helm

B.  

Google Container Engine and Cloud Load Balancing

C.  

Google Compute Engine and Cloud Deployment Manager

D.  

Google Compute Engine, Jenkins, and Cloud Load Balancing

Discussion 0
Question # 24

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game

programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What

should you do?

Options:

A.  

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.

B.  

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.

C.  

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.  

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Discussion 0
Question # 25

For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API.

You want to follow Google-recommended practices. How should you design the backend?

Options:

A.  

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.

B.  

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.

C.  

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.

D.  

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

Discussion 0
Question # 26

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform’s resilience to changes in mobile network latency. What should you do?

Options:

A.  

Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.

B.  

Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.

C.  

Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.

D.  

Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

Discussion 0
Question # 27

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.

Which two steps should be part of their migration plan? (Choose two.)

Options:

A.  

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.

B.  

Write a schema migration plan to denormalize data for better performance in BigQuery.

C.  

Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.

D.  

Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E.  

Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Discussion 0
Question # 28

The current Dress4Win system architecture has high latency to some customers because it is located in one data center.

As part of a future evaluation and optimization for performance in the cloud, Dress4Win wants to distribute its system architecture to multiple locations on Google Cloud Platform.

Which approach should they use?

Options:

A.  

Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.

B.  

Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.

C.  

Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.

D.  

Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of separate managed instance groups.

Discussion 0
Question # 29

For this question, refer to the Dress4Win case study.

As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in. Which configuration should Dress4Win use?

Options:

A.  

Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.

B.  

Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.
C.  

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.

D.  

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.

Discussion 0
Question # 30

For this question, refer to the Dress4Win case study.

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

Options:

A.  

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

B.  

They should add additional unit tests and production scale load tests on their cloud staging environment.

C.  

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

D.  

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Discussion 0
Question # 31

Dress4Win has end-to-end tests covering 100% of their endpoints.

They want to ensure that the move to the cloud does not introduce any new bugs.

Which additional testing methods should the developers employ to prevent an outage?

Options:

A.  

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

B.  

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

C.  

They should add additional unit tests and production-scale load tests on their cloud staging environment.

D.  

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Discussion 0
Question # 32

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

Options:

A.  

Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.

B.  

Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

C.  

Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.

D.  

Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.

Discussion 0
Question # 33

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

Options:

A.  

Google Kubernetes Engine with an SSL Ingress

B.  

Cloud IoT Core with public/private key pairs

C.  

Compute Engine with project-wide SSH keys

D.  

Compute Engine with specific SSH keys

Discussion 0
Question # 34

You are migrating a Linux-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do?

Options:

A.  

Open a support case regarding the CVE and chat with the support engineer.

B.  

Read the CVEs from the Google Cloud Status Dashboard to understand the impact.

C.  

Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact

D.  

Post a question regarding the CVE in Stack Overflow to get an explanation

E.  

Post a question regarding the CVE in a Google Cloud discussion group to get an explanation

Discussion 0
Question # 35

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?

Options:

A.  

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

B.  

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

C.  

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

D.  

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

Discussion 0
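As background for the lifecycle-management options above, a minimal Cloud Storage lifecycle configuration that deletes objects once they are 36 months old (roughly 1095 days) can be sketched as follows; the bucket name is a placeholder:

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 1095}
    }
  ]
}
```

Saved as `lifecycle.json`, this would be applied with `gsutil lifecycle set lifecycle.json gs://example-bucket`. The `age` condition is expressed in days, which is why the 36-month retention requirement is approximated as 1095.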
Question # 36

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

Options:

A.  

Verify that the database is online.

B.  

Verify that the project quota hasn't been exceeded.

C.  

Verify that the new feature code did not introduce any performance bugs.

D.  

Verify that the load-testing team is not running their tool against production.

Discussion 0
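As context for the quota option above, project-wide and per-region Compute Engine quota usage can be inspected from the CLI; a sketch, assuming the Google Cloud SDK is installed and `PROJECT_ID` and the region are placeholders:

```shell
# Describe the project; the output includes a "quotas" section
# listing each metric with its limit and current usage.
gcloud compute project-info describe --project PROJECT_ID

# Describe a region to see regional quotas (e.g., CPUs, in-use addresses).
gcloud compute regions describe us-central1 --project PROJECT_ID
```

Comparing `usage` against `limit` in the output quickly shows whether a quota has been exhausted, which commonly surfaces as failed autoscaling and 503 errors under sudden load.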
Question # 37

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

Options:

A.  

Tests should scale well beyond the prior approaches.

B.  

Unit tests are no longer required, only end-to-end tests.

C.  

Tests should be applied after the release is in the production environment.

D.  

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Discussion 0
Question # 38

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

Options:

A.  

Use a private cluster with a private endpoint with master authorized networks configured.

B.  

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.  

Use a private cluster with a public endpoint with master authorized networks configured.

D.  

Use a public cluster with master authorized networks enabled and firewall rules.

Discussion 0
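For reference on the private-cluster options above, a GKE private cluster with a private control-plane endpoint and master authorized networks can be sketched with gcloud; the cluster name, CIDR ranges, and region are placeholders:

```shell
# Private nodes + private endpoint: neither nodes nor the control plane
# get public IPs; only the authorized internal range can reach the API server.
gcloud container clusters create ehr-private-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 10.0.0.0/8 \
  --region us-central1
```

Private clusters must be VPC-native, hence `--enable-ip-alias`; combining a private endpoint with master authorized networks is what minimizes the attack surface relative to the public-cluster options.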
Question # 39

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

• Services are deployed redundantly across multiple regions in the US and Europe.

• Only frontend services are exposed on the public internet.

• They can provide a single frontend IP for their fleet of services.

• Deployment artifacts are immutable.

Which set of products should they use?

Options:

A.  

Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B.  

Google Cloud Storage, Google App Engine, Google Network Load Balancer

C.  

Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

D.  

Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Discussion 0
Question # 40

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

Options:

A.  

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.  

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.  

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.  

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.  

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Discussion 0