
Your team is working on a data pipeline that processes large datasets using AWS Glue. You have been tasked with ensuring the data quality of the incoming data. Describe the steps you would take to run data quality checks while processing the data, and explain how you would define data quality rules using AWS Glue DataBrew.
A. Run data quality checks by manually inspecting the data and identifying any inconsistencies.
B. Use AWS Glue to run data quality checks by writing custom scripts that check for empty fields and other data quality issues.
C. Define data quality rules using AWS Glue DataBrew by creating a new project, selecting the dataset, and specifying the rules to be applied.
D. Ignore data quality checks and focus solely on processing the data as quickly as possible.
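The custom-script approach described in option B can be sketched as a small validation function. This is a minimal illustration, not AWS Glue API code: it assumes records arrive as plain Python dicts, whereas in a real Glue job the rows would come from a DynamicFrame or Spark DataFrame, and the function and field names here are hypothetical.

```python
def check_empty_fields(records, required_fields):
    """Return (row_index, field) pairs where a required field is
    missing, None, or an empty/whitespace-only string."""
    failures = []
    for i, row in enumerate(records):
        for field in required_fields:
            value = row.get(field)
            if value is None or (isinstance(value, str) and not value.strip()):
                failures.append((i, field))
    return failures

# Hypothetical sample rows; two contain empty required fields.
sample = [
    {"id": "1", "email": "a@example.com"},
    {"id": "2", "email": ""},
    {"id": "", "email": "c@example.com"},
]
print(check_empty_fields(sample, ["id", "email"]))
# -> [(1, 'email'), (2, 'id')]
```

In practice, the same kind of completeness rule can be expressed declaratively: AWS Glue also offers a managed Data Quality feature with its own rule language (DQDL), and AWS Glue DataBrew lets you attach validation rules to a dataset through a project, as option C describes.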