
You are working on a data pipeline that processes data from a human resources department. The data consists of employee records containing personal details and job positions. You have been tasked with ensuring the data quality of the employee records dataset. Describe the steps you would take to run data quality checks on the dataset, and explain how you would define data quality rules to identify and resolve inconsistencies in employee job positions.
A. Run data quality checks by manually inspecting each employee record and identifying inconsistencies in job positions.
B. Use AWS Glue to run data quality checks by writing custom scripts that identify inconsistencies in job positions based on specific patterns.
C. Define data quality rules using AWS Glue DataBrew by creating a new project, selecting the employee records dataset, and specifying rules to identify and resolve data inconsistencies related to employee job positions.
D. Ignore data quality checks and assume the job positions are consistent.
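
Option C outlines a rule-based approach in AWS Glue DataBrew. As a rough illustration of the kind of check such a rule performs, the sketch below reproduces the logic locally with pandas: it flags records whose job title is missing or does not match an approved value. The column name `job_position` and the approved-title list are assumptions made for the example, not details taken from the question.

```python
# Minimal sketch: a local approximation of a job-position data quality rule.
# In DataBrew the equivalent check would be configured as part of a project's
# ruleset rather than written as code.
import pandas as pd

# Hypothetical employee records; in practice this would be the dataset
# registered with AWS Glue DataBrew.
records = pd.DataFrame(
    {
        "employee_id": [101, 102, 103, 104],
        "job_position": ["Engineer", "engineer", "HR Manager", None],
    }
)

# Assumed rule: every job_position must be present and must exactly match
# one of the approved titles.
APPROVED_POSITIONS = {"Engineer", "HR Manager", "Analyst"}

violations = records[
    records["job_position"].isna()
    | ~records["job_position"].isin(APPROVED_POSITIONS)
]

# Rows with a case mismatch ("engineer") or a missing value are flagged,
# which is the kind of inconsistency a data quality rule is meant to surface.
print(violations)
```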