
Professional-Machine-Learning-Engineer | Google Professional Machine Learning Engineer Practice Questions | Test Your Knowledge for Free


Professional-Machine-Learning-Engineer Practice Questions

Google Professional Machine Learning Engineer

Last Update: 3 days ago
Total Questions: 296

Dive into our fully updated and stable Professional-Machine-Learning-Engineer practice test platform, featuring all the latest Machine Learning Engineer exam questions added this week. Our preparation tool is more than just a Google study aid; it's a strategic advantage.

Our free Machine Learning Engineer practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key Professional-Machine-Learning-Engineer concepts. Use this test to pinpoint the areas where you need to focus your study.

Professional-Machine-Learning-Engineer PDF

Professional-Machine-Learning-Engineer PDF (Printable)
$43.75
$124.99

Professional-Machine-Learning-Engineer Testing Engine

Professional-Machine-Learning-Engineer Testing Engine
$50.75
$144.99

Professional-Machine-Learning-Engineer PDF + Testing Engine

Professional-Machine-Learning-Engineer PDF + Testing Engine
$63.70
$181.99
Question # 31

You work for a retailer that sells clothes to customers around the world. You have been tasked with ensuring that ML models are built in a secure manner. Specifically, you need to protect sensitive customer data that might be used in the models. You have identified four fields containing sensitive data that are being used by your data science team: AGE, IS_EXISTING_CUSTOMER, LATITUDE_LONGITUDE, and SHIRT_SIZE.

What should you do with the data before it is made available to the data science team for training purposes?

Options:

A.  

Tokenize all of the fields using hashed dummy values to replace the real values.

B.  

Use principal component analysis (PCA) to reduce the four sensitive fields to one PCA vector.

C.  

Coarsen the data by putting AGE into quantiles and rounding LATITUDE_LONGITUDE into single precision. The other two fields are already as coarse as possible.

D.  

Remove all sensitive data fields, and ask the data science team to build their models using non-sensitive data.
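For context on the coarsening approach in option C, here is a minimal NumPy sketch (illustrative field handling, not a production de-identification pipeline) that buckets ages into quantiles and rounds coordinates to single precision:

```python
import numpy as np

def coarsen(ages, lat_lng_pairs, n_quantiles=4):
    """De-identify by coarsening: bucket ages into quantiles and
    round coordinates to single (32-bit) precision."""
    ages = np.asarray(ages, dtype=float)
    # Quantile edges, e.g. quartiles when n_quantiles=4
    edges = np.quantile(ages, np.linspace(0, 1, n_quantiles + 1))
    # Bucket index 0..n_quantiles-1 for each age
    age_buckets = np.clip(
        np.searchsorted(edges, ages, side="right") - 1,
        0, n_quantiles - 1)
    # float32 keeps roughly 7 significant digits; real de-identification
    # usually rounds coordinates much more aggressively than this.
    coarse_coords = np.asarray(lat_lng_pairs, dtype=np.float32)
    return age_buckets, coarse_coords

buckets, coords = coarsen([18, 25, 34, 47, 62],
                          [(51.5072451, -0.1275831)])
```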

Question # 32

You recently built the first version of an image segmentation model for a self-driving car. After deploying the model, you observe a decrease in the area under the curve (AUC) metric. When analyzing the video recordings, you also discover that the model fails in highly congested traffic but works as expected when there is less traffic. What is the most likely reason for this result?

Options:

A.  

The model is overfitting in areas with less traffic and underfitting in areas with more traffic.

B.  

AUC is not the correct metric to evaluate this classification model.

C.  

Too much data representing congested areas was used for model training.

D.  

Gradients become small and vanish while backpropagating from the output to input nodes.

Question # 33

You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an “Out of Memory” error. What should you do?

Options:

A.  

Use batch prediction mode instead of online mode.

B.  

Send the request again with a smaller batch of instances.

C.  

Use base64 to encode your data before using it for prediction.

D.  

Apply for a quota increase for the number of prediction requests.
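To illustrate option B, a minimal sketch of splitting a large online-prediction payload into smaller request bodies (the chunk size and payload shape are illustrative, not Vertex AI limits):

```python
import json

def chunked_requests(instances, max_per_request=10):
    """Split a large prediction payload into smaller request bodies
    so each request stays within the model server's memory limits."""
    for i in range(0, len(instances), max_per_request):
        yield json.dumps({"instances": instances[i:i + max_per_request]})

# 25 instances split into request bodies of 10, 10, and 5 instances
bodies = list(chunked_requests([{"x": i} for i in range(25)],
                               max_per_request=10))
```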

Question # 34

Your team is working on an NLP research project to predict the political affiliation of authors based on articles they have written. You have a large training dataset that is structured like this:

[Dataset exhibit image not included]

You followed the standard 80%-10%-10% data distribution across the training, testing, and evaluation subsets. How should you distribute the training examples across the train-test-eval subsets while maintaining the 80-10-10 proportion?

A) [Exhibit image]

B) [Exhibit image]

C) [Exhibit image]

D) [Exhibit image]

Options:

A.  

Option A

B.  

Option B

C.  

Option C

D.  

Option D

Question # 35

You are developing a recommendation engine for an online clothing store. The historical customer transaction data is stored in BigQuery and Cloud Storage. You need to perform exploratory data analysis (EDA), preprocessing and model training. You plan to rerun these EDA, preprocessing, and training steps as you experiment with different types of algorithms. You want to minimize the cost and development effort of running these steps as you experiment. How should you configure the environment?

Options:

A.  

Create a Vertex AI Workbench user-managed notebook using the default VM instance, and use the %%bigquery magic commands in Jupyter to query the tables.

B.  

Create a Vertex AI Workbench managed notebook to browse and query the tables directly from the JupyterLab interface.

C.  

Create a Vertex AI Workbench user-managed notebook on a Dataproc Hub, and use the %%bigquery magic commands in Jupyter to query the tables.

D.  

Create a Vertex AI Workbench managed notebook on a Dataproc cluster, and use the spark-bigquery-connector to access the tables.

Question # 36

You are the lead ML engineer on a mission-critical project that involves analyzing massive datasets using Apache Spark. You need to establish a robust environment that allows your team to rapidly prototype Spark models using Jupyter notebooks. What is the fastest way to achieve this?

Options:

A.  

Configure a Compute Engine instance with Spark and use Jupyter notebooks.

B.  

Set up a Dataproc cluster with Spark and use Jupyter notebooks.

C.  

Set up a Vertex AI Workbench instance with a Spark kernel.

D.  

Use Colab Enterprise with a Spark kernel.

Question # 37

You have developed a fraud detection model for a large financial institution using Vertex AI. The model achieves high accuracy, but stakeholders are concerned about potential bias based on customer demographics. You have been asked to provide insights into the model's decision-making process and identify any fairness issues. What should you do?

Options:

A.  

Enable Vertex AI Model Monitoring to detect training-serving skew. Configure an alert to send an email when the skew or drift for a model’s feature exceeds a predefined threshold. Retrain the model by appending new data to existing training data.

B.  

Compile a dataset of unfair predictions. Use Vertex AI Vector Search to identify similar data points in the model's predictions. Report these data points to the stakeholders.

C.  

Use feature attribution in Vertex AI to analyze model predictions and the impact of each feature on the model's predictions.

D.  

Create feature groups using Vertex AI Feature Store to segregate customer demographic features and non-demographic features. Retrain the model using only non-demographic features.

Question # 38

Your team needs to build a model that predicts whether images contain a driver's license, passport, or credit card. The data engineering team already built the pipeline and generated a dataset composed of 10,000 images with driver's licenses, 1,000 images with passports, and 1,000 images with credit cards. You now have to train a model with the following label map: ['drivers_license', 'passport', 'credit_card']. Which loss function should you use?

Options:

A.  

Categorical hinge

B.  

Binary cross-entropy

C.  

Categorical cross-entropy

D.  

Sparse categorical cross-entropy
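To clarify the distinction between the last two options, a minimal NumPy sketch (not the Keras implementations) showing that categorical cross-entropy expects one-hot labels while the sparse variant expects integer labels, and that both compute the same loss:

```python
import numpy as np

def categorical_crossentropy(y_onehot, probs):
    # Expects one-hot labels, e.g. [0, 1, 0] for class "passport"
    return -np.sum(y_onehot * np.log(probs), axis=-1)

def sparse_categorical_crossentropy(y_int, probs):
    # Expects integer labels, e.g. 1 for class "passport"
    return -np.log(probs[np.arange(len(y_int)), y_int])

probs = np.array([[0.7, 0.2, 0.1]])   # model output over 3 classes
onehot = np.array([[0.0, 1.0, 0.0]])  # one-hot label
sparse = np.array([1])                # the same label as an integer

cc = categorical_crossentropy(onehot, probs)
scc = sparse_categorical_crossentropy(sparse, probs)
# Both equal -log(0.2), the negative log-probability of the true class
```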

Question # 39

You work for a social media company. You need to detect whether posted images contain cars. Each training example is a member of exactly one class. You have trained an object detection neural network and deployed the model version to AI Platform Prediction for evaluation. Before deployment, you created an evaluation job and attached it to the AI Platform Prediction model version. You notice that the precision is lower than your business requirements allow. How should you adjust the model's final layer softmax threshold to increase precision?

Options:

A.  

Increase the recall

B.  

Decrease the recall.

C.  

Increase the number of false positives

D.  

Decrease the number of false negatives
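The precision/recall trade-off behind this question can be sketched directly: raising the softmax threshold yields fewer positive predictions, which typically raises precision at the cost of recall. A minimal example with made-up scores and labels:

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall of positive predictions at a given
    softmax threshold."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.9, 0.8, 0.6, 0.55]
labels = [True, True, False, True, False]
low = precision_recall(scores, labels, 0.5)    # many positives: lower precision
high = precision_recall(scores, labels, 0.85)  # few positives: higher precision, lower recall
```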

Question # 40

You trained a text classification model. You have the following SignatureDefs:

[SignatureDefs exhibit image not included]

What is the correct way to write the predict request?

Options:

A.  

data = json.dumps({"signature_name": "serving_default", "instances": [['ab', 'bc', 'cd']]})

B.  

data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b', 'c', 'd', 'e', 'f']]})

C.  

data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b', 'c'], ['d', 'e', 'f']]})

D.  

data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b'], ['c', 'd'], ['e', 'f']]})
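Whichever option matches the deployed SignatureDef, the request body is assembled the same way with json.dumps. A minimal sketch (the instance shapes here are illustrative, not taken from the exhibit):

```python
import json

# One request carrying two instances; each instance is a sequence of
# tokens. Match the inner shape to the SignatureDef's declared input
# shape for your own model.
data = json.dumps({
    "signature_name": "serving_default",
    "instances": [["a", "b", "c"], ["d", "e", "f"]],
})
payload = json.loads(data)  # round-trip to verify the body is valid JSON
```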

Get Professional-Machine-Learning-Engineer dumps and pass your exam in 24 hours!
