
Answer-first summary for fast verification
Answer: Modify the AWS Glue job to copy the rows into a staging Redshift table. Add SQL commands to update the existing rows with new values from the staging Redshift table.
Option A is CORRECT because loading into a staging table and then merging into the target with SQL is the standard upsert pattern for Amazon Redshift. The staged rows update matching records in place, and only records that do not already exist are inserted, so rerunning the AWS Glue job reapplies the same values instead of introducing duplicates.
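To make the pattern concrete, here is a minimal sketch of how the staged merge might look inside a Glue job script. The Glue connection name, the sales/sales_staging table names, the id key column, and the S3 temp path are all assumptions for illustration; Glue's postactions connection option runs the given SQL in Redshift after the staging load completes.

```python
# A minimal sketch of the Option A pattern, assuming the job has already
# produced a DynamicFrame named processed_frame. Connection name, table
# names, the "id" key column, and the S3 temp path are hypothetical.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# SQL that Redshift runs after the staging load finishes:
# update rows that already exist, insert the ones that do not,
# then empty the staging table for the next run.
merge_sql = """
    UPDATE public.sales
       SET amount = s.amount, updated_at = s.updated_at
      FROM public.sales_staging s
     WHERE public.sales.id = s.id;
    INSERT INTO public.sales
    SELECT s.*
      FROM public.sales_staging s
      LEFT JOIN public.sales t ON s.id = t.id
     WHERE t.id IS NULL;
    TRUNCATE public.sales_staging;
"""

glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=processed_frame,
    catalog_connection="redshift-connection",  # hypothetical Glue connection
    connection_options={
        "database": "dev",
        "dbtable": "public.sales_staging",     # rows land in staging first
        "postactions": merge_sql,
    },
    redshift_tmp_dir="s3://example-temp-bucket/glue/",
)
```

Because the UPDATE runs before the INSERT, and the INSERT only adds keys missing from the target, the merge is idempotent: rerunning the job overwrites existing rows with identical values rather than duplicating them.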
Author: Ritesh Yadav
Question 18/60
A company uploads .csv files to an Amazon S3 bucket. The company's data platform team has set up an AWS Glue crawler to perform data discovery and to create the tables and schemas.
An AWS Glue job writes processed data from the tables to an Amazon Redshift database. The AWS Glue job handles column mapping and creates the Amazon Redshift tables in the Redshift database appropriately.
If the company reruns the AWS Glue job for any reason, duplicate records are introduced into the Amazon Redshift tables. The company needs a solution that will update the Redshift tables without duplicates.
Which solution will meet these requirements?
A
Modify the AWS Glue job to copy the rows into a staging Redshift table. Add SQL commands to update the existing rows with new values from the staging Redshift table.
B
Modify the AWS Glue job to load the previously inserted data into a MySQL database. Perform an upsert operation in the MySQL database. Copy the results to the Amazon Redshift tables.
C
Use Apache Spark's DataFrame dropDuplicates() API to eliminate duplicates. Write the data to the Redshift tables.
D
Use the AWS Glue ResolveChoice built-in transform to select the value of the column from the most recent record.