
Answer-first summary for fast verification
Answer: Option C. Use an open-source data lake format to merge the daily snapshot from the data source into the S3 data lake, inserting new records and updating existing ones.
Option C is CORRECT because an open-source table format such as Apache Hudi, Delta Lake, or Apache Iceberg supports upsert (merge) semantics directly on files in Amazon S3. Each daily snapshot can be merged into the existing table, inserting new records and updating changed ones, without provisioning a relational database as an intermediary. This is the most cost-effective approach: it relies on S3's low storage costs and the formats' incremental processing. By contrast, option A's Lambda function cannot practically diff snapshots that are tens of terabytes (Lambda is limited to 15 minutes of runtime and at most 10 GB of memory), and options B and D add the ongoing cost of an RDS or Aurora database plus AWS DMS solely to detect changes.
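For readers who want to see what option C looks like in practice, here is a minimal sketch assuming Apache Hudi running on Spark (for example, on AWS Glue or Amazon EMR). The bucket paths, the table name, and the record_id / updated_at columns are hypothetical placeholders, not details from the question.

```python
from pyspark.sql import SparkSession

# Minimal sketch of option C, assuming Apache Hudi on Spark.
# The Hudi Spark bundle JAR must already be on the Spark classpath.
spark = (
    SparkSession.builder
    .appName("daily-snapshot-cdc-merge")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Read the daily full snapshot that the data source delivers as JSON.
snapshot = spark.read.json("s3://example-source-bucket/snapshots/latest/")

hudi_options = {
    "hoodie.table.name": "transactions",
    # "record_id" is an assumed primary-key column in the snapshot.
    "hoodie.datasource.write.recordkey.field": "record_id",
    # "updated_at" is an assumed last-modified timestamp, used to keep the
    # newest version when the same key appears more than once.
    "hoodie.datasource.write.precombine.field": "updated_at",
    # "upsert" inserts new records and updates existing ones in place.
    "hoodie.datasource.write.operation": "upsert",
    # Keep the sketch simple with a non-partitioned table.
    "hoodie.datasource.write.keygenerator.class": "org.apache.hudi.keygen.NonpartitionedKeyGenerator",
}

# Merge the snapshot into the S3 data lake: Hudi compares record keys and
# rewrites only the affected files, which is the CDC behavior the question asks for.
(
    snapshot.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3://example-lake-bucket/lake/transactions/")
)
```

Delta Lake and Apache Iceberg offer equivalent upsert semantics through MERGE INTO in Spark SQL; the choice among the three formats does not change the cost argument.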
Question 50 of 60
A company uses Amazon S3 to store semi-structured data in a transactional data lake. Some of the data files are small, but other data files are tens of terabytes.
A data engineer must perform a change data capture (CDC) operation to identify changed data from the data source. The data source sends a full snapshot as a JSON file every day, and the engineer must ingest only the changed data into the data lake.
Which solution will capture the changed data MOST cost-effectively?
A
Create an AWS Lambda function to identify the changes between the previous data and the current data. Configure the Lambda function to ingest the changes into the data lake.
B
Ingest the data into Amazon RDS for MySQL. Use AWS Database Migration Service (AWS DMS) to write the changed data to the data lake.
C
Use an open source data lake format to merge the data source with the S3 data lake to insert the new data and update the existing data.
D
Ingest the data into an Amazon Aurora MySQL DB instance that runs Aurora Serverless. Use AWS Database Migration Service (AWS DMS) to write the changed data to the data lake.