
Your team is working on a data pipeline that processes data from a financial institution. The data includes transaction records with sensitive information. You have been tasked with ensuring the data quality of the transaction records dataset. Describe the steps you would take to run data quality checks on the transaction records dataset and explain how you would define data quality rules to ensure the data is consistent and accurate.
A
Run data quality checks by manually inspecting each transaction record and identifying any inconsistencies or inaccuracies.
B
Use AWS Glue to run data quality checks by writing custom scripts that identify inconsistencies or inaccuracies in the transaction records.
C
Define data quality rules using AWS Glue DataBrew by creating a new project, selecting the transaction records dataset, and specifying rules to ensure the data is consistent and accurate.
D
Ignore data quality checks and assume the data is consistent and accurate.
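For context on option C, the sketch below shows one way data quality rules could be defined programmatically for a DataBrew dataset with boto3, rather than through the console project workflow the option describes. The dataset ARN, column names (`amount`, `fee_percentage`), rule names, and thresholds are all hypothetical placeholders, and the check-expression syntax should be verified against the current AWS Glue DataBrew documentation; treat this as an illustrative sketch, not a definitive implementation.

```python
"""Illustrative sketch only: ARNs, column names, and rule values are hypothetical."""
import boto3

databrew = boto3.client("databrew")

# Create a data quality ruleset attached to a (hypothetical) DataBrew dataset
# that holds the transaction records.
response = databrew.create_ruleset(
    Name="transaction-records-quality-rules",
    Description="Consistency and accuracy checks for transaction records",
    # ARN of the DataBrew dataset the rules apply to (placeholder account/region).
    TargetArn="arn:aws:databrew:us-east-1:123456789012:dataset/transaction-records",
    Rules=[
        {
            # Every transaction amount must be positive, for 100% of rows.
            "Name": "amount-is-positive",
            "CheckExpression": ":col1 > :val1",
            "SubstitutionMap": {":col1": "`amount`", ":val1": "0"},
            "Threshold": {
                "Value": 100.0,
                "Type": "GREATER_THAN_OR_EQUAL",
                "Unit": "PERCENTAGE",
            },
        },
        {
            # A hypothetical fee percentage column must stay within 0-100.
            "Name": "fee-percentage-in-range",
            "CheckExpression": ":col1 between :val1 and :val2",
            "SubstitutionMap": {":col1": "`fee_percentage`", ":val1": "0", ":val2": "100"},
        },
    ],
)

print("Created ruleset:", response["Name"])
```

Under this approach, the ruleset would then be evaluated by running a DataBrew profile job against the dataset with the ruleset attached, and the validation report reviewed to confirm the transaction records meet the consistency and accuracy rules.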