11.11 Sale: Limited Time 65% Discount Offer - Coupon code: 65pass65

Good news! The Data-Engineer-Associate AWS Certified Data Engineer - Associate (DEA-C01) question pool is now stable, with a strong record of passing results.

Data-Engineer-Associate Practice Exam Questions and Answers

AWS Certified Data Engineer - Associate (DEA-C01)

Last Update: 4 days ago
Total Questions: 218

The AWS Certified Data Engineer question pool is now stable; the latest exam questions were added 4 days ago. Incorporating Data-Engineer-Associate practice exam questions into your study plan is more than just a preparation strategy.

Data-Engineer-Associate exam questions often include scenarios and problem-solving exercises that mirror real-world challenges. Working through Data-Engineer-Associate dumps allows you to practice pacing yourself, ensuring that you can complete the full AWS Certified Data Engineer practice test within the allotted time frame.

Data-Engineer-Associate PDF (Printable)
$43.75 (regular price $124.99)

Data-Engineer-Associate Testing Engine
$50.75 (regular price $144.99)

Data-Engineer-Associate PDF + Testing Engine
$63.70 (regular price $181.99)

Question # 1

A company needs to implement a data mesh architecture for its trading, risk, and compliance teams. Each team owns its own data but needs to share views with the other teams. The company has more than 1,000 tables across 50 AWS Glue databases. All teams use Amazon Athena and Amazon Redshift, and compliance requires full auditing and fine-grained access control for PII.

Options:

A.  

Create views in Athena for on-demand analysis. Use the Athena views in Amazon Redshift to perform cross-domain analytics. Use AWS CloudTrail to audit data access. Use AWS Lake Formation to establish fine-grained access control.

B.  

Use AWS Glue Data Catalog views. Use CloudTrail logs and Lake Formation to manage permissions.

C.  

Use Lake Formation to set up cross-domain access to tables. Set up fine-grained access controls.

D.  

Create materialized views and enable Amazon Redshift datashares for each domain.

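Study note: option A relies on AWS Lake Formation fine-grained access control over tables already registered in the AWS Glue Data Catalog, with AWS CloudTrail providing the audit trail. The Python (boto3) sketch below shows roughly what a column-level Lake Formation grant looks like; the account ID, role, database, table, and column names are placeholders, not values from the question.

import boto3

# Hypothetical column-level grant: give a compliance role SELECT access to a
# trading-domain table while leaving PII columns out of the grant.
lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/compliance-analyst"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "trading_db",
            "Name": "trades",
            "ColumnNames": ["trade_id", "symbol", "quantity"],  # PII columns omitted
        }
    },
    Permissions=["SELECT"],
)
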
Question # 2

A company wants to analyze sales records that the company stores in a MySQL database. The company wants to correlate the records with sales opportunities identified by Salesforce.

The company receives 2 GB of sales records every day. The company has 100 GB of identified sales opportunities. A data engineer needs to develop a process that will analyze and correlate sales records and sales opportunities. The process must run once each night.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.  

Use Amazon Managed Workflows for Apache Airflow (Amazon MWAA) to fetch both datasets. Use AWS Lambda functions to correlate the datasets. Use AWS Step Functions to orchestrate the process.

B.  

Use Amazon AppFlow to fetch sales opportunities from Salesforce. Use AWS Glue to fetch sales records from the MySQL database. Correlate the sales records with the sales opportunities. Use Amazon Managed Workflows for Apache Airflow (Amazon MWAA) to orchestrate the process.

C.  

Use Amazon AppFlow to fetch sales opportunities from Salesforce. Use AWS Glue to fetch sales records from the MySQL database. Correlate the sales records with sales opportunities. Use AWS Step Functions to orchestrate the process.

D.  

Use Amazon AppFlow to fetch sales opportunities from Salesforce. Use Amazon Kinesis Data Streams to fetch sales records from the MySQL database. Use Amazon Managed Service for Apache Flink to correlate the datasets. Use AWS Step Functions to orchestrate the process.

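Study note: options B and C both pull the Salesforce data with Amazon AppFlow and the MySQL data with AWS Glue, and differ only in the orchestrator (MWAA versus Step Functions). A minimal boto3 sketch of the two nightly steps an orchestrator would invoke is shown below; the flow and job names are made up for illustration.

import boto3

# Kick off the AppFlow flow that pulls Salesforce opportunities, then the Glue
# job that extracts the MySQL sales records and joins the two datasets.
appflow = boto3.client("appflow")
glue = boto3.client("glue")

appflow.start_flow(flowName="salesforce-opportunities-daily")
glue.start_job_run(JobName="correlate-sales-and-opportunities")
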
Question # 3

A company has an Amazon Redshift data warehouse that users access by using a variety of IAM roles. More than 100 users access the data warehouse every day.

The company wants to control user access to the objects based on each user's job role, permissions, and how sensitive the data is.

Which solution will meet these requirements?

Options:

A.  

Use the role-based access control (RBAC) feature of Amazon Redshift.

B.  

Use the row-level security (RLS) feature of Amazon Redshift.

C.  

Use the column-level security (CLS) feature of Amazon Redshift.

D.  

Use dynamic data masking policies in Amazon Redshift.

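Study note: option A refers to Amazon Redshift role-based access control (RBAC), which is managed with SQL statements such as CREATE ROLE and GRANT. The sketch below runs those statements through the Redshift Data API; the cluster, database, schema, role, and user names are assumptions for illustration only.

import boto3

# Create a role, grant it schema-level SELECT, and attach it to a user.
redshift_data = boto3.client("redshift-data")

for sql in [
    "CREATE ROLE finance_analyst;",
    "GRANT SELECT ON ALL TABLES IN SCHEMA finance TO ROLE finance_analyst;",
    "GRANT ROLE finance_analyst TO alice;",
]:
    redshift_data.execute_statement(
        ClusterIdentifier="reporting-cluster",
        Database="dev",
        DbUser="admin",
        Sql=sql,
    )
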
Question # 4

A data engineer needs to use Amazon Neptune to develop graph applications.

Which programming languages should the engineer use to develop the graph applications? (Select TWO.)

Options:

A.  

Gremlin

B.  

SQL

C.  

ANSI SQL

D.  

SPARQL

E.  

Spark SQL

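Study note: Amazon Neptune exposes a property-graph interface queried with Gremlin (option A) and an RDF interface queried with SPARQL (option D). The snippet below is a minimal Gremlin traversal using the gremlin_python client; the Neptune endpoint is a placeholder.

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Connect to the (placeholder) Neptune Gremlin endpoint and count vertices.
conn = DriverRemoteConnection(
    "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(conn)

print(g.V().count().next())   # number of vertices in the graph
conn.close()
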
Question # 5

A company wants to use Apache Spark jobs that run on an Amazon EMR cluster to process streaming data. The Spark jobs will transform and store the data in an Amazon S3 bucket. The company will use Amazon Athena to perform analysis.

The company needs to optimize the data format for analytical queries.

Which solutions will meet these requirements with the SHORTEST query times? (Select TWO.)

Options:

A.  

Use Avro format. Use AWS Glue Data Catalog to track schema changes.

B.  

Use ORC format. Use AWS Glue Data Catalog to track schema changes.

C.  

Use Apache Parquet format. Use an external Amazon DynamoDB table to track schema changes.

D.  

Use Apache Parquet format. Use AWS Glue Data Catalog to track schema changes.

E.  

Use ORC format. Store schema definitions in separate files in Amazon S3.

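Study note: options B and D pair a columnar file format with the AWS Glue Data Catalog, a combination Athena can query efficiently. The PySpark sketch below writes partitioned Parquet output to S3; the bucket paths and partition column are placeholders, and the Data Catalog (for example, via a crawler) would track the resulting schema and partitions.

from pyspark.sql import SparkSession

# Transform raw data and persist it as date-partitioned Parquet for Athena.
spark = SparkSession.builder.appName("transform-to-parquet").getOrCreate()

df = spark.read.json("s3://example-raw-bucket/events/")   # assumed raw input

(df.write
   .mode("append")
   .partitionBy("event_date")
   .parquet("s3://example-analytics-bucket/events_parquet/"))
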
Question # 6

A company stores daily records of the financial performance of investment portfolios in .csv format in an Amazon S3 bucket. A data engineer uses AWS Glue crawlers to crawl the S3 data.

The data engineer must make the S3 data accessible daily in the AWS Glue Data Catalog.

Which solution will meet these requirements?

Options:

A.  

Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Configure the output destination to a new path in the existing S3 bucket.

B.  

Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Specify a database name for the output.

C.  

Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. Specify a database name for the output.

D.  

Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. Configure the output destination to a new path in the existing S3 bucket.

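Study note: option B describes a crawler that uses a role with the AWSGlueServiceRole managed policy, points at the source S3 path, runs on a daily schedule, and writes tables to a Data Catalog database. A hedged boto3 sketch of such a crawler follows; the names, role ARN, path, and schedule are illustrative.

import boto3

# Daily crawler over the .csv records, writing tables to a catalog database.
glue = boto3.client("glue")

glue.create_crawler(
    Name="daily-portfolio-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",   # role with AWSGlueServiceRole attached
    DatabaseName="portfolio_performance",
    Targets={"S3Targets": [{"Path": "s3://example-portfolio-bucket/daily-records/"}]},
    Schedule="cron(0 2 * * ? *)",   # once a day at 02:00 UTC
)
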
Question # 7

A data engineering team is using an Amazon Redshift data warehouse for operational reporting. The team wants to prevent performance issues that might result from long-running queries. A data engineer must choose a system table in Amazon Redshift that records anomalies when the query optimizer identifies conditions that might indicate performance issues.

Which system table should the data engineer use to meet this requirement?

Options:

A.  

STL_USAGE_CONTROL

B.  

STL_ALERT_EVENT_LOG

C.  

STL_QUERY_METRICS

D.  

STL_PLAN_INFO

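Study note: among the options, STL_ALERT_EVENT_LOG is the Amazon Redshift system table that records an alert when the query optimizer identifies conditions that might indicate performance issues. The sketch below reads recent alerts through the Redshift Data API; the cluster, database, and user values are placeholders.

import boto3

# Pull the last day of optimizer alerts from STL_ALERT_EVENT_LOG.
redshift_data = boto3.client("redshift-data")

redshift_data.execute_statement(
    ClusterIdentifier="reporting-cluster",
    Database="dev",
    DbUser="admin",
    Sql="""
        SELECT query, event, solution, event_time
        FROM stl_alert_event_log
        WHERE event_time > DATEADD(day, -1, GETDATE())
        ORDER BY event_time DESC;
    """,
)
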
Question # 8

A company wants to migrate an application and an on-premises Apache Kafka server to AWS. The application processes incremental updates that an on-premises Oracle database sends to the Kafka server. The company wants to use the replatform migration strategy instead of the refactor strategy.

Which solution will meet these requirements with the LEAST management overhead?

Options:

A.  

Amazon Kinesis Data Streams

B.  

Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned cluster

C.  

Amazon Data Firehose

D.  

Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless

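Study note: options B and D keep the application on Apache Kafka (a replatform rather than a refactor), and the serverless flavor removes broker sizing and scaling work. The boto3 sketch below creates an MSK Serverless cluster with IAM client authentication; the subnet, security group, and cluster names are placeholders.

import boto3

# Stand up an MSK Serverless cluster so existing Kafka clients can reconnect.
kafka = boto3.client("kafka")

kafka.create_cluster_v2(
    ClusterName="migrated-kafka-workload",
    Serverless={
        "VpcConfigs": [
            {
                "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
                "SecurityGroupIds": ["sg-0123456789abcdef0"],
            }
        ],
        "ClientAuthentication": {"Sasl": {"Iam": {"Enabled": True}}},
    },
)
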
Question # 9

A financial company wants to implement a data mesh. The data mesh must support centralized data governance, data analysis, and data access control. The company has decided to use AWS Glue for data catalogs and extract, transform, and load (ETL) operations.

Which combination of AWS services will implement a data mesh? (Choose two.)

Options:

A.  

Use Amazon Aurora for data storage. Use an Amazon Redshift provisioned cluster for data analysis.

B.  

Use Amazon S3 for data storage. Use Amazon Athena for data analysis.

C.  

Use AWS Glue DataBrew for centralized data governance and access control.

D.  

Use Amazon RDS for data storage. Use Amazon EMR for data analysis.

E.  

Use AWS Lake Formation for centralized data governance and access control.

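Study note: options B and E together sketch a common data mesh pattern on AWS: domain data in Amazon S3, centralized governance through AWS Lake Formation, and consumption through Amazon Athena. The boto3 snippet below submits an Athena query against a catalog database; the database, table, and results location are assumptions.

import boto3

# Run a consumer-side query through Athena; Lake Formation permissions on the
# catalog tables govern what the caller can read.
athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT domain, SUM(amount) AS total FROM transactions GROUP BY domain;",
    QueryExecutionContext={"Database": "finance_mesh"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
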
Question # 10

A data engineer is optimizing query performance in Amazon Athena notebooks that use Apache Spark to analyze large datasets that are stored in Amazon S3. The data is partitioned. An AWS Glue crawler updates the partitions.

The data engineer wants to minimize the amount of data that is scanned to improve efficiency of Athena queries.

Which solution will meet these requirements?

Options:

A.  

Apply partition filters in the queries.

B.  

Increase the frequency of AWS Glue crawler invocations to update the data catalog more often.

C.  

Organize the data that is in Amazon S3 by using a nested directory structure.

D.  

Configure Spark to use in-memory caching for frequently accessed data.

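Study note: option A works because a filter on a partition column lets the engine prune partitions and read only the matching S3 prefixes instead of scanning the whole dataset. The PySpark sketch below assumes the data is partitioned by an event_date column; the paths and column names are illustrative.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Reading with a partition filter: only the matching partition is scanned.
spark = SparkSession.builder.appName("partition-filter-demo").getOrCreate()

events = spark.read.parquet("s3://example-analytics-bucket/events_parquet/")

daily = events.filter(col("event_date") == "2024-11-01")   # partition pruning
daily.groupBy("event_type").count().show()
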
Get Data-Engineer-Associate dumps and pass your exam in 24 hours!
