

MLA-C01 Practice Questions

AWS Certified Machine Learning Engineer - Associate

Last Update 4 days ago
Total Questions : 241

Dive into our fully updated and stable MLA-C01 practice test platform, featuring the latest AWS Certified Machine Learning Engineer - Associate exam questions added this week. Our preparation tool is more than an Amazon Web Services study aid; it's a strategic advantage.

Our free AWS Certified Associate practice questions are crafted to reflect the domains and difficulty of the actual exam. The detailed rationales explain the 'why' behind each answer, reinforcing key MLA-C01 concepts. Use this test to pinpoint the areas where you should focus your study.

MLA-C01 PDF (Printable): $43.75 (regular price $124.99)

MLA-C01 Testing Engine: $50.75 (regular price $144.99)

MLA-C01 PDF + Testing Engine: $63.70 (regular price $181.99)
Question # 51

A company regularly receives new training data from the vendor of an ML model. The vendor delivers cleaned and prepared data to the company's Amazon S3 bucket every 3-4 days.

The company has an Amazon SageMaker pipeline to retrain the model. An ML engineer needs to implement a solution to run the pipeline when new data is uploaded to the S3 bucket.

Which solution will meet these requirements with the LEAST operational effort?

Options:

A.  

Create an S3 Lifecycle rule to transfer the data to the SageMaker training instance and to initiate training.

B.  

Create an AWS Lambda function that scans the S3 bucket. Program the Lambda function to initiate the pipeline when new data is uploaded.

C.  

Create an Amazon EventBridge rule that has an event pattern that matches the S3 upload. Configure the pipeline as the target of the rule.

D.  

Use Amazon Managed Workflows for Apache Airflow (Amazon MWAA) to orchestrate the pipeline when new data is uploaded.
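For context on option C, an EventBridge rule matches events with a small JSON event pattern. The following is a minimal sketch in Python; the bucket name, pipeline ARN, and rule name are hypothetical placeholders, and the S3 bucket must have EventBridge notifications enabled for these events to be emitted.

```python
import json

# Hypothetical names -- substitute your own bucket and pipeline ARN.
BUCKET = "example-training-data-bucket"
PIPELINE_ARN = "arn:aws:sagemaker:us-east-1:111122223333:pipeline/retrain-pipeline"

# Event pattern matching S3 "Object Created" events for the bucket.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": [BUCKET]}},
}

# The rule and target would then be created with boto3, for example:
# events = boto3.client("events")
# events.put_rule(Name="on-new-training-data",
#                 EventPattern=json.dumps(event_pattern))
# events.put_targets(Rule="on-new-training-data",
#                    Targets=[{"Id": "1", "Arn": PIPELINE_ARN,
#                              "RoleArn": "<role allowed to start the pipeline>"}])

print(json.dumps(event_pattern, indent=2))
```

Because the rule targets the pipeline directly, no polling code or scheduler has to be written or maintained.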

Question # 52

An ML engineer is using Amazon Quick Suite (previously known as Amazon QuickSight) anomaly detection to detect very high or very low machine operating temperatures compared to normal. The ML engineer sets the Severity parameter to Low and above. The ML engineer sets the Direction parameter to All.

What effect will the ML engineer observe in the anomaly detection results if the ML engineer changes the Direction parameter to Lower than expected?

Options:

A.  

Increased anomaly identification frequency and increased recall

B.  

Decreased anomaly identification frequency and decreased recall

C.  

Increased anomaly identification frequency and decreased recall

D.  

Decreased anomaly identification frequency and increased recall
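The trade-off behind this question can be illustrated with a toy simulation: restricting the detection direction to only lower-than-expected values flags fewer points overall and misses the true high-side anomalies. The thresholds and values below are invented for illustration and are not how Quick Suite computes anomalies internally.

```python
import random

random.seed(0)
# Toy data: normal operating temperatures plus two high and two low anomalies.
normal = [random.gauss(70, 2) for _ in range(1000)]
readings = normal + [95, 96, 40, 41]

low, high = 60, 80  # stand-ins for the "expected" bounds

# Direction = ALL: flag deviations in either direction.
flagged_all = [x for x in readings if x < low or x > high]
# Direction = LOWER THAN EXPECTED: flag only low deviations.
flagged_lower = [x for x in readings if x < low]

true_anomalies = {95, 96, 40, 41}
recall_all = len(true_anomalies & set(flagged_all)) / len(true_anomalies)
recall_lower = len(true_anomalies & set(flagged_lower)) / len(true_anomalies)

print(len(flagged_all), len(flagged_lower))  # fewer flags in one direction
print(recall_all, recall_lower)              # recall drops when highs are ignored
```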

Question # 53

An ML engineer needs to process thousands of existing CSV objects and new CSV objects that are uploaded. The CSV objects are stored in a central Amazon S3 bucket and have the same number of columns. One of the columns is a transaction date. The ML engineer must query the data based on the transaction date.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Use an Amazon Athena CREATE TABLE AS SELECT (CTAS) statement to create a table based on the transaction date from data in the central S3 bucket. Query the objects from the table.

B.  

Create a new S3 bucket for processed data. Set up S3 replication from the central S3 bucket to the new S3 bucket. Use S3 Object Lambda to query the objects based on transaction date.

C.  

Create a new S3 bucket for processed data. Use AWS Glue for Apache Spark to create a job to query the CSV objects based on transaction date. Configure the job to store the results in the new S3 bucket. Query the objects from the new S3 bucket.

D.  

Create a new S3 bucket for processed data. Use Amazon Data Firehose to transfer the data from the central S3 bucket to the new S3 bucket. Configure Firehose to run an AWS Lambda function to query the data based on transaction date.
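For context on option A, an Athena CTAS statement can write the result as a table partitioned on the transaction date, so later queries prune by partition. The sketch below uses hypothetical table and location names; note that in Athena CTAS, partition columns must appear last in the SELECT list.

```python
# Hypothetical table, source, and S3 location names -- adjust for your setup.
ctas = """
CREATE TABLE transactions_by_date
WITH (
    format = 'PARQUET',
    external_location = 's3://example-processed-bucket/transactions/',
    partitioned_by = ARRAY['transaction_date']
) AS
SELECT id, amount, transaction_date  -- partition column last
FROM raw_transactions
"""

# A follow-up query then prunes to a single partition, for example:
query = "SELECT * FROM transactions_by_date WHERE transaction_date = DATE '2024-01-15'"

print(ctas)
print(query)
```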

Question # 54

A company is developing an application that reads animal descriptions from user prompts and generates images based on the information in the prompts. The application reads a message from an Amazon Simple Queue Service (Amazon SQS) queue. Then the application uses Amazon Titan Image Generator on Amazon Bedrock to generate an image based on the information in the message. Finally, the application removes the message from the SQS queue.

Which IAM permissions should the company assign to the application's IAM role? (Select TWO.)

Options:

A.  

Allow the bedrock:InvokeModel action for the Amazon Titan Image Generator resource.

B.  

Allow the bedrock:Get* action for the Amazon Titan Image Generator resource.

C.  

Allow the sqs:ReceiveMessage action and the sqs:DeleteMessage action for the SQS queue resource.

D.  

Allow the sqs:GetQueueAttributes action and the sqs:DeleteMessage action for the SQS queue resource.

E.  

Allow the sagemaker:PutRecord* action for the Amazon Titan Image Generator resource.
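As background, IAM permissions of this kind are expressed as JSON policy statements. A sketch combining a Bedrock model-invocation permission with SQS consume permissions might look like the following; the account ID, Region, queue name, and model ARN are hypothetical placeholders.

```python
import json

# Hypothetical ARNs -- substitute your account, Region, queue, and model.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeTitanImageGenerator",
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-image-generator-v1",
        },
        {
            "Sid": "ConsumeQueue",
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:image-prompts",
        },
    ],
}

print(json.dumps(policy, indent=2))
```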

Question # 55

An ML engineer uses one ML framework to train multiple ML models. The ML engineer needs to optimize inference costs and host the models on Amazon SageMaker AI.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.  

Create a multi-container inference endpoint for direct invocation.

B.  

Create a multi-model inference endpoint for all the models.

C.  

Create a multi-container inference endpoint for sequential invocation.

D.  

Create multiple single-model inference endpoints for each model.
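For background on multi-model hosting, a multi-model endpoint loads model artifacts on demand from a shared S3 prefix, so a single endpoint (and a single set of instances) serves many models built with the same framework. The sketch below shows the relevant request fields only; every name, ARN, and the image URI are hypothetical placeholders.

```python
# Sketch of a CreateModel request for a multi-model endpoint.
# All names and ARNs below are placeholders, not real resources.
create_model_request = {
    "ModelName": "shared-framework-models",
    "ExecutionRoleArn": "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    "PrimaryContainer": {
        "Image": "<framework inference image URI>",
        "Mode": "MultiModel",  # load model artifacts on demand from S3
        "ModelDataUrl": "s3://example-model-bucket/models/",  # shared prefix
    },
}

# At invocation time, the caller names the specific artifact to use:
invoke_request = {
    "EndpointName": "shared-framework-endpoint",
    "TargetModel": "model-07.tar.gz",  # selects one hosted model
}
```

Because all the models share one container image, this pattern requires that they use the same ML framework, which matches the scenario in the question.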

Question # 56

A company has an existing Amazon SageMaker AI model (v1) on a production endpoint. The company develops a new model version (v2) and needs to test v2 in production before substituting v2 for v1.

The company needs to minimize the risk of v2 generating incorrect output in production and must prevent any disruption of production traffic during the change.

Which solution will meet these requirements?

Options:

A.  

Create a second production variant for v2. Assign 1% of the traffic to v2 and 99% to v1. Collect all output of v2 in Amazon S3. If v2 performs as expected, switch all traffic to v2.

B.  

Create a second production variant for v2. Assign 10% of the traffic to v2 and 90% to v1. Collect all output of v2 in Amazon S3. If v2 performs as expected, switch all traffic to v2.

C.  

Deploy v2 to a new endpoint. Turn on data capture for the production endpoint. Send 100% of the input data to v2.

D.  

Deploy v2 into a shadow variant that samples 100% of the inference requests. Collect all output in Amazon S3. If v2 performs as expected, promote v2 to production.
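For context on option D, a shadow variant is declared alongside the production variant in the endpoint configuration: shadow requests are mirrored to v2 and their responses are captured but never returned to callers. This is a minimal sketch of the relevant fields; the model, variant, and instance names are hypothetical, and the exact sampling controls depend on how the shadow test is configured.

```python
# Sketch of an endpoint configuration with a shadow variant for v2.
# Names and instance types are placeholders.
endpoint_config = {
    "EndpointConfigName": "prod-with-shadow",
    "ProductionVariants": [
        {
            "VariantName": "v1-production",
            "ModelName": "model-v1",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.xlarge",
        }
    ],
    "ShadowProductionVariants": [
        {
            "VariantName": "v2-shadow",
            "ModelName": "model-v2",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.xlarge",
            # Assumption: weight 1.0 relative to production mirrors 100%
            # of requests to the shadow variant.
            "InitialVariantWeight": 1.0,
        }
    ],
}
```

Production traffic is still answered only by v1, so v2 can be evaluated on real requests with zero disruption before being promoted.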

Question # 57

A company has built more than 50 models and deployed the models on Amazon SageMaker AI as real-time inference endpoints. The company needs to reduce the costs of the SageMaker AI inference endpoints. The company used the same ML framework to build the models. The company's customers require low-latency access to the models.

Select and order the correct steps from the following list to reduce the cost of inference and keep latency low. Select each step one time or not at all. (Select and order FIVE.)

· Create an endpoint configuration that references a multi-model container.

· Create a SageMaker AI model with multi-model endpoints enabled.

· Deploy a real-time inference endpoint by using the endpoint configuration.

· Deploy a serverless inference endpoint configuration by using the endpoint configuration.

· Spread the existing models to multiple different Amazon S3 bucket paths.

· Upload the existing models to the same Amazon S3 bucket path.

· Update the models to use the new endpoint ID. Pass the model IDs to the new endpoint.

Question # 58

An ML engineer wants to run a training job on Amazon SageMaker AI. The training job will train a neural network by using multiple GPUs. The training dataset is stored in Parquet format.

The ML engineer discovered that the Parquet dataset contains files too large to fit into the memory of the SageMaker AI training instances.

Which solution will fix the memory problem?

Options:

A.  

Attach an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD volume to the instance. Store the files in the EBS volume.

B.  

Repartition the Parquet files by using Apache Spark on Amazon EMR. Use the repartitioned files for the training job.

C.  

Change the instance type to Memory Optimized instances with sufficient memory for the training job.

D.  

Use the SageMaker AI distributed data parallelism (SMDDP) library with multiple instances to split the memory usage.

Question # 59

A company has a team of data scientists who use Amazon SageMaker notebook instances to test ML models. When the data scientists need new permissions, the company attaches the permissions to each individual role that was created during the creation of the SageMaker notebook instance.

The company needs to centralize management of the team ' s permissions.

Which solution will meet this requirement?

Options:

A.  

Create a single IAM role that has the necessary permissions. Attach the role to each notebook instance that the team uses.

B.  

Create a single IAM group. Add the data scientists to the group. Associate the group with each notebook instance that the team uses.

C.  

Create a single IAM user. Attach the AdministratorAccess AWS managed IAM policy to the user. Configure each notebook instance to use the IAM user.

D.  

Create a single IAM group. Add the data scientists to the group. Create an IAM role. Attach the AdministratorAccess AWS managed IAM policy to the role. Associate the role with the group. Associate the group with each notebook instance that the team uses.

Question # 60

A company uses Amazon SageMaker AI to create ML models. The data scientists need fine-grained control of ML workflows, DAG visualization, experiment history, and model governance for auditing and compliance.

Which solution will meet these requirements?

Options:

A.  

Use AWS CodePipeline with SageMaker Studio and SageMaker ML Lineage Tracking.

B.  

Use AWS CodePipeline with SageMaker Experiments.

C.  

Use SageMaker Pipelines with SageMaker Studio and SageMaker ML Lineage Tracking.

D.  

Use SageMaker Pipelines with SageMaker Experiments.
