Ensuring data quality and consistency is a cornerstone of building reliable Azure Databricks-based data pipelines. Which approach offers an automated and scalable solution for testing data quality and consistency across your datasets?
A
Writing custom validation scripts in Databricks notebooks that run as part of the pipeline execution, outputting data quality metrics (illustrated in the sketch after the options)
B
Relying on Azure Data Factory's data flow debug features to validate data quality without additional testing in Databricks
C
Manual review of random data samples before and after processing in Databricks
D
Utilizing a third-party tool specifically designed for data quality testing, integrated with Azure Databricks via APIs
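For reference, option A could look something like the minimal sketch below: a PySpark validation cell run inside a Databricks notebook as a pipeline step, failing the run when a check is violated and persisting the metrics so quality can be tracked over time. The table names (sales_silver, dq_metrics), the order_id column, and the zero-tolerance thresholds are hypothetical placeholders, not part of the question.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Databricks notebook, `spark` is already defined; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Hypothetical source table produced by an earlier pipeline step.
df = spark.read.table("sales_silver")

# Compute simple data quality metrics in a single aggregation pass.
metrics = df.agg(
    F.count(F.lit(1)).alias("row_count"),
    F.sum(F.col("order_id").isNull().cast("int")).alias("null_order_ids"),
    F.countDistinct("order_id").alias("distinct_order_ids"),
).first()

duplicate_rows = metrics["row_count"] - metrics["distinct_order_ids"]

# Fail the pipeline run if a check is violated, so downstream steps never see bad data.
# Thresholds here are illustrative; real pipelines would make them configurable.
assert metrics["null_order_ids"] == 0, "order_id contains null values"
assert duplicate_rows == 0, f"{duplicate_rows} duplicate order_id rows found"

# Persist the metrics to a (hypothetical) tracking table for trend monitoring.
metrics_df = spark.createDataFrame(
    [(metrics["row_count"], metrics["null_order_ids"], duplicate_rows)],
    ["row_count", "null_order_ids", "duplicate_rows"],
)
metrics_df.write.mode("append").saveAsTable("dq_metrics")
```

Scheduling a cell like this as part of each pipeline run is what makes the approach automated and scalable: the checks execute on the cluster alongside the data, and the emitted metrics provide an auditable history of dataset quality.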