
Associate-Data-Practitioner: Google Cloud Associate Data Practitioner (ADP Exam) questions are now stable with pass results | Test Your Knowledge for Free

Exams4sure Dumps

Associate-Data-Practitioner Practice Questions

Google Cloud Associate Data Practitioner (ADP Exam)

Last Update 1 day ago
Total Questions: 106

Dive into our fully updated and stable Associate-Data-Practitioner practice test platform, featuring all the latest Google Cloud Platform exam questions added this week. Our preparation tool is more than just a Google study aid; it's a strategic advantage.

Our free Google Cloud Platform practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key concepts about Associate-Data-Practitioner. Use this test to pinpoint which areas you need to focus your study on.

Associate-Data-Practitioner PDF

Associate-Data-Practitioner PDF (Printable)
$43.75
$124.99

Associate-Data-Practitioner Testing Engine

Associate-Data-Practitioner Testing Engine
$50.75
$144.99

Associate-Data-Practitioner PDF + Testing Engine

Associate-Data-Practitioner PDF + Testing Engine
$63.70
$181.99
Question # 11

You work for an online retail company. Your company collects customer purchase data in CSV files and pushes them to Cloud Storage every 10 minutes. The data needs to be transformed and loaded into BigQuery for analysis. The transformation involves cleaning the data, removing duplicates, and enriching it with product information from a separate table in BigQuery. You need to implement a low-overhead solution that initiates data processing as soon as the files are loaded into Cloud Storage. What should you do?

Options:

A.  

Use Cloud Composer sensors to detect when files are loaded into Cloud Storage. Create a Dataproc cluster, and use a Composer task to execute a job on the cluster to process and load the data into BigQuery.

B.  

Schedule a directed acyclic graph (DAG) in Cloud Composer to run hourly to batch load the data from Cloud Storage to BigQuery, and process the data in BigQuery using SQL.

C.  

Use Dataflow to implement a streaming pipeline using an OBJECT_FINALIZE notification from Pub/Sub to read the data from Cloud Storage, perform the transformations, and write the data to BigQuery.

D.  

Create a Cloud Data Fusion job to process and load the data from Cloud Storage into BigQuery. Create an OBJECT_FINALIZE notification in Pub/Sub, and trigger a Cloud Run function to start the Cloud Data Fusion job as soon as new files are loaded.
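Both options C and D hinge on Cloud Storage publishing an OBJECT_FINALIZE event to Pub/Sub whenever a new file lands. A minimal sketch of that wiring in Python (google-cloud-storage client), with placeholder bucket and topic names:

    from google.cloud import storage

    # Assumption: the bucket and the Pub/Sub topic already exist (placeholder names).
    client = storage.Client()
    bucket = client.bucket("retail-purchase-csv")

    # Publish a Pub/Sub message every time a new object is finalized in the bucket.
    notification = bucket.notification(
        topic_name="purchase-file-events",
        event_types=["OBJECT_FINALIZE"],
        payload_format="JSON_API_V1",
    )
    notification.create()

A Dataflow streaming pipeline (option C) or a Cloud Run function that starts a Cloud Data Fusion pipeline (option D) can then consume these messages and begin processing as soon as each file arrives.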

Question # 12

You manage a web application that stores data in a Cloud SQL database. You need to improve the read performance of the application by offloading read traffic from the primary database instance. You want to implement a solution that minimizes effort and cost. What should you do?

Options:

A.  

Use Cloud CDN to cache frequently accessed data.

B.  

Store frequently accessed data in a Memorystore instance.

C.  

Migrate the database to a larger Cloud SQL instance.

D.  

Enable automatic backups, and create a read replica of the Cloud SQL instance.
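For illustration, the read offloading in option D typically pairs a Cloud SQL read replica with application-side routing. A rough Python sketch using SQLAlchemy, assuming the replica has already been created; hostnames, credentials, and table names are placeholders:

    from sqlalchemy import create_engine, text

    # Assumption: the primary and its read replica are reachable at these placeholder addresses.
    primary = create_engine("mysql+pymysql://app:secret@10.0.0.5/appdb")
    replica = create_engine("mysql+pymysql://app:secret@10.0.0.6/appdb")

    def engine_for(readonly: bool):
        # Send read-only traffic to the replica; keep writes on the primary.
        return replica if readonly else primary

    with engine_for(readonly=True).connect() as conn:
        rows = conn.execute(text("SELECT id, name FROM customers LIMIT 10")).fetchall()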

Question # 13

Your team is building several data pipelines that contain a collection of complex tasks and dependencies that you want to execute on a schedule, in a specific order. The tasks and dependencies consist of files in Cloud Storage, Apache Spark jobs, and data in BigQuery. You need to design a system that can schedule and automate these data processing tasks using a fully managed approach. What should you do?

Options:

A.  

Use Cloud Scheduler to schedule the jobs to run.

B.  

Use Cloud Tasks to schedule and run the jobs asynchronously.

C.  

Create directed acyclic graphs (DAGs) in Cloud Composer. Use the appropriate operators to connect to Cloud Storage, Spark, and BigQuery.

D.  

Create directed acyclic graphs (DAGs) in Apache Airflow deployed on Google Kubernetes Engine. Use the appropriate operators to connect to Cloud Storage, Spark, and BigQuery.
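To make option C concrete, here is a trimmed-down Cloud Composer (Airflow) DAG sketch in Python; the project, bucket, cluster, jar, and query are placeholders:

    from datetime import datetime
    from airflow import DAG
    from airflow.providers.google.cloud.sensors.gcs import GCSObjectExistenceSensor
    from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

    with DAG("daily_pipeline", start_date=datetime(2024, 1, 1),
             schedule_interval="0 2 * * *", catchup=False) as dag:

        # Wait for the input file to appear in Cloud Storage.
        wait_for_file = GCSObjectExistenceSensor(
            task_id="wait_for_file",
            bucket="example-landing-bucket",
            object="exports/daily.csv",
        )

        # Run the Spark transformation on an existing Dataproc cluster.
        spark_transform = DataprocSubmitJobOperator(
            task_id="spark_transform",
            project_id="example-project",
            region="us-central1",
            job={
                "placement": {"cluster_name": "example-cluster"},
                "spark_job": {
                    "main_class": "com.example.Transform",
                    "jar_file_uris": ["gs://example-landing-bucket/jobs/transform.jar"],
                },
            },
        )

        # Aggregate the results in BigQuery.
        aggregate_in_bq = BigQueryInsertJobOperator(
            task_id="aggregate_in_bq",
            configuration={"query": {
                "query": "SELECT region, SUM(amount) AS total FROM example_dataset.sales GROUP BY region",
                "useLegacySql": False,
            }},
        )

        wait_for_file >> spark_transform >> aggregate_in_bq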

Question # 14

You work for a global financial services company that trades stocks 24/7. You have a Cloud SQL for PostgreSQL user database. You need to identify a solution that ensures that the database is continuously operational, minimizes downtime, and will not lose any data in the event of a zonal outage. What should you do?

Options:

A.  

Continuously back up the Cloud SQL instance to Cloud Storage. Create a Compute Engine instance with PostgreSQL in a different region. Restore the backup in the Compute Engine instance if a failure occurs.

B.  

Create a read replica in another region. Promote the replica to primary if a failure occurs.

C.  

Configure and create a high-availability Cloud SQL instance with the primary instance in zone A and a secondary instance in any zone other than zone A.

D.  

Create a read replica in the same region but in a different zone.
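For reference, a regional (high-availability) Cloud SQL instance as described in option C can be created through the Cloud SQL Admin API. A hedged Python sketch using the Google API client library, with placeholder project and instance names:

    from googleapiclient import discovery

    # Assumption: Application Default Credentials are configured for the placeholder project.
    sqladmin = discovery.build("sqladmin", "v1beta4")

    body = {
        "name": "trading-db-ha",
        "databaseVersion": "POSTGRES_15",
        "region": "us-central1",
        "settings": {
            "tier": "db-custom-4-16384",
            # REGIONAL availability keeps a synchronously replicated standby in a
            # different zone, enabling automatic failover during a zonal outage.
            "availabilityType": "REGIONAL",
        },
    }
    sqladmin.instances().insert(project="example-project", body=body).execute()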

Question # 15

Your company uses Looker to generate and share reports with various stakeholders. You have a complex dashboard with several visualizations that needs to be delivered to specific stakeholders on a recurring basis, with customized filters applied for each recipient. You need an efficient and scalable solution to automate the delivery of this customized dashboard. You want to follow the Google-recommended approach. What should you do?

Options:

A.  

Create a separate LookML model for each stakeholder with predefined filters, and schedule the dashboards using the Looker Scheduler.

B.  

Create a script using the Looker Python SDK, and configure user attribute filter values. Generate a new scheduled plan for each stakeholder.

C.  

Embed the Looker dashboard in a custom web application, and use the application's scheduling features to send the report with personalized filters.

D.  

Use the Looker Scheduler with a user attribute filter on the dashboard, and send the dashboard with personalized filters to each stakeholder based on their attributes.
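As an illustration of the scheduling mechanics behind options B and D, a rough sketch using the Looker Python SDK to create a scheduled plan with a per-recipient filter. Field names can vary between SDK versions, and the dashboard ID, filter, crontab, and address are all placeholders:

    import looker_sdk
    from looker_sdk import models40 as models

    sdk = looker_sdk.init40()  # reads looker.ini or environment variables

    # One scheduled plan per stakeholder, with that stakeholder's filter applied.
    plan = sdk.create_scheduled_plan(
        body=models.WriteScheduledPlan(
            name="weekly-sales-emea",
            dashboard_id="42",
            crontab="0 8 * * 1",
            filters_string="Region=EMEA",
            scheduled_plan_destination=[
                models.ScheduledPlanDestination(
                    type="email",
                    format="wysiwyg_pdf",
                    address="emea-lead@example.com",
                )
            ],
        )
    )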

Question # 16

You are storing data in Cloud Storage for a machine learning project. The data is frequently accessed during the model training phase, minimally accessed after 30 days, and unlikely to be accessed after 90 days. You need to choose the appropriate storage class for the different stages of the project to minimize cost. What should you do?

Options:

A.  

Store the data in Nearline storage during the model training phase. Transition the data to Coldline storage 30 days after model deployment, and to Archive storage 90 days after model deployment.

B.  

Store the data in Standard storage during the model training phase. Transition the data to Nearline storage 30 days after model deployment, and to Coldline storage 90 days after model deployment.

C.  

Store the data in Nearline storage during the model training phase. Transition the data to Archive storage 30 days after model deployment, and to Coldline storage 90 days after model deployment.

D.  

Store the data in Standard storage during the model training phase. Transition the data to Durable Reduced Availability (DRA) storage 30 days after model deployment, and to Coldline storage 90 days after model deployment.
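For context, the tiering described in option B maps directly onto Object Lifecycle Management rules on the bucket. A minimal Python sketch (google-cloud-storage client) with a placeholder bucket name, assuming the bucket's default storage class is Standard:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("example-ml-training-data")  # placeholder bucket

    # New objects stay in Standard (the bucket default), then step down as access drops off.
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
    bucket.patch()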

Question # 17

You are a database administrator managing sales transaction data by region stored in a BigQuery table. You need to ensure that each sales representative can only see the transactions in their region. What should you do?

Options:

A.  

Add a policy tag in BigQuery.

B.  

Create a row-level access policy.

C.  

Create a data masking rule.

D.  

Grant the appropriate IAM permissions on the dataset.
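To show what option B looks like in practice, a short sketch that runs the row-level access policy DDL through the BigQuery Python client; the dataset, table, column, and group are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Only members of the placeholder group see rows where region = 'US'.
    ddl = """
    CREATE OR REPLACE ROW ACCESS POLICY us_region_only
    ON example_dataset.sales_transactions
    GRANT TO ('group:us-sales-reps@example.com')
    FILTER USING (region = 'US')
    """
    client.query(ddl).result()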

Question # 18

You are a data analyst at your organization. You have been given a BigQuery dataset that includes customer information. The dataset contains inconsistencies and errors, such as missing values, duplicates, and formatting issues. You need to effectively and quickly clean the data. What should you do?

Options:

A.  

Develop a Dataflow pipeline to read the data from BigQuery, perform data quality rules and transformations, and write the cleaned data back to BigQuery.

B.  

Use Cloud Data Fusion to create a data pipeline to read the data from BigQuery, perform data quality transformations, and write the clean data back to BigQuery.

C.  

Export the data from BigQuery to CSV files. Resolve the errors using a spreadsheet editor, and re-import the cleaned data into BigQuery.

D.  

Use BigQuery's built-in functions to perform data quality transformations.
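For reference, the cleanup in option D can often be done in a single statement with BigQuery's built-in functions. A sketch with placeholder table and column names:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Normalize strings, fill missing values, and keep only the latest row per customer.
    sql = """
    CREATE OR REPLACE TABLE example_dataset.customers_clean AS
    SELECT
      customer_id,
      INITCAP(TRIM(full_name)) AS full_name,
      LOWER(TRIM(email)) AS email,
      COALESCE(country, 'UNKNOWN') AS country
    FROM example_dataset.customers_raw
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) = 1
    """
    client.query(sql).result()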

Question # 19

You work for a retail company that collects customer data from various sources:

    Online transactions: Stored in a MySQL database

    Customer feedback: Stored as text files on a company server

    Social media activity: Streamed in real-time from social media platforms

You need to design a data pipeline to extract and load the data into the appropriate Google Cloud storage system(s) for further analysis and ML model training. What should you do?

Options:

A.  

Copy the online transactions data into Cloud SQL for MySQL. Import the customer feedback into BigQuery. Stream the social media activity into Cloud Storage.

B.  

Extract and load the online transactions data into BigQuery. Load the customer feedback data into Cloud Storage. Stream the social media activity by using Pub/Sub and Dataflow, and store the data in BigQuery.

C.  

Extract and load the online transactions data, customer feedback data, and social media activity into Cloud Storage.

D.  

Extract and load the online transactions data into Bigtable. Import the customer feedback data into Cloud Storage. Store the social media activity in Cloud SQL for MySQL.
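As an illustration of the streaming leg described in option B, a minimal Apache Beam (Dataflow) sketch in Python that reads social media events from Pub/Sub and writes them to BigQuery; the subscription and table names are placeholders:

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)  # run with the DataflowRunner in practice

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadSocialEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/social-activity")
            | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.social_activity",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
        )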

Question # 20

You are working with a small dataset in Cloud Storage that needs to be transformed and loaded into BigQuery for analysis. The transformation involves simple filtering and aggregation operations. You want to use the most efficient and cost-effective data manipulation approach. What should you do?

Options:

A.  

Use Dataproc to create an Apache Hadoop cluster, perform the ETL process using Apache Spark, and load the results into BigQuery.

B.  

Use BigQuery's SQL capabilities to load the data from Cloud Storage, transform it, and store the results in a new BigQuery table.

C.  

Create a Cloud Data Fusion instance and visually design an ETL pipeline that reads data from Cloud Storage, transforms it using built-in transformations, and loads the results into BigQuery.

D.  

Use Dataflow to perform the ETL process that reads the data from Cloud Storage, transforms it using Apache Beam, and writes the results to BigQuery.
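To ground option B, a sketch that queries the CSV files in place through an external table and persists the filtered aggregation, all with BigQuery SQL driven from the Python client; the bucket, dataset, and column names are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Define an external table over the CSV files, then materialize the aggregated result.
    sql = """
    CREATE OR REPLACE EXTERNAL TABLE example_dataset.orders_ext
    OPTIONS (format = 'CSV', uris = ['gs://example-bucket/orders/*.csv'], skip_leading_rows = 1);

    CREATE OR REPLACE TABLE example_dataset.orders_summary AS
    SELECT product_id, SUM(quantity) AS total_quantity
    FROM example_dataset.orders_ext
    WHERE order_status = 'COMPLETED'
    GROUP BY product_id;
    """
    client.query(sql).result()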

Get Associate-Data-Practitioner dumps and pass your exam in 24 hours!

Free Exams Sample Questions