
Answer-first summary for fast verification
Answer: Use AWS Glue DataBrew recipes to read and transform the CSV files.
Option D is CORRECT because AWS Glue DataBrew provides a visual interface with more than 250 built-in transformations, so a data engineer can rename columns, remove specific columns, skip rows, derive new columns from existing data, and filter records by value without writing any code. This low-code approach meets every stated requirement with the least development effort. By contrast, options A and C still require writing and maintaining Glue job code, and a crawler (option B) only catalogs schema metadata; it does not transform data.
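To make the required transformations concrete, here is a minimal plain-Python sketch of the same five steps a DataBrew recipe would perform visually (skip the second row, rename a column, drop a column, derive a new column from the first data row, filter by a numeric column). The column names, sample data, and the `batch-` derivation rule are invented for illustration; they are not from the question.

```python
import csv
import io

# Hypothetical sample CSV: a header row, a junk second row, then records.
raw = """id,old_name,extra,price
units,text,text,usd
1,widget,x,15
2,gadget,y,5
"""

rows = list(csv.reader(io.StringIO(raw)))
header, _junk, data = rows[0], rows[1], rows[2:]   # ignore the second row

# Rename a column: old_name -> product
header = ["product" if c == "old_name" else c for c in header]

# Remove a specific column: extra
drop = header.index("extra")
header = [c for i, c in enumerate(header) if i != drop]
data = [[v for i, v in enumerate(r) if i != drop] for r in data]

# Create a new column based on the values of the first data row
header.append("source_id")
first_id = data[0][header.index("id")]
for r in data:
    r.append(f"batch-{first_id}")

# Filter the results by a numeric value of a column: keep price > 10
price = header.index("price")
data = [r for r in data if float(r[price]) > 10]

print(header)  # ['id', 'product', 'price', 'source_id']
print(data)    # [['1', 'widget', '15', 'batch-1']]
```

Writing and maintaining code like this is exactly the development effort option D avoids: in DataBrew each of these steps is a single recipe step chosen from the transformation menu.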
Author: Ritesh Yadav
Question 26/58
A company stores CSV files in an Amazon S3 bucket. A data engineer needs to process the data in the CSV files and store the processed data in a new S3 bucket.
The process needs to rename a column, remove specific columns, ignore the second row of each file, create a new column based on the values of the first row of the data, and filter the results by a numeric value of a column.
Which solution will meet these requirements with the LEAST development effort?
A. Use AWS Glue Python jobs to read and transform the CSV files.
B. Use an AWS Glue custom crawler to read and transform the CSV files.
C. Use an AWS Glue workflow to build a set of jobs to crawl and transform the CSV files.
D. Use AWS Glue DataBrew recipes to read and transform the CSV files.