To ensure comprehensive monitoring of your Databricks data pipelines, covering both performance metrics and data quality, which combination of tools and techniques would you recommend?
A. Use Databricks' built-in dashboard for performance monitoring and manually review data samples for quality.
B. Configure Azure Monitor with Databricks metrics and logs, and use Data Quality rules within Azure Purview.
C. Leverage MLflow for monitoring job performance and integrate Apache Griffin for data quality checks.
D. Implement custom logging in your Spark jobs to track performance and data quality metrics, storing logs in Delta tables.
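As an illustration of the approach described in option D, the following is a minimal sketch of custom logging in a PySpark job that records simple performance and data-quality metrics and appends them to a Delta table. The table names (`pipeline_metrics`, `raw.orders`), the `id` column used for the quality check, and the helper function name are assumptions for illustration only, not part of the question.

```python
# Minimal sketch of option D: custom performance/quality logging to a Delta table.
# Assumes a Databricks/PySpark environment; table and column names are hypothetical.
import time
from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.getOrCreate()

def log_pipeline_metrics(job_name, df, started_at):
    """Capture basic performance and data-quality metrics for a batch
    and append them to a Delta table for later dashboarding or alerting."""
    row_count = df.count()
    null_ids = df.filter(df["id"].isNull()).count()  # example quality rule: no null IDs
    metrics = [Row(
        job_name=job_name,
        run_ts=time.time(),
        duration_sec=time.time() - started_at,
        row_count=row_count,
        null_id_count=null_ids,
    )]
    (spark.createDataFrame(metrics)
          .write.format("delta")
          .mode("append")
          .saveAsTable("pipeline_metrics"))  # hypothetical metrics table

# Usage: wrap a pipeline stage and record its metrics.
start = time.time()
orders = spark.read.table("raw.orders")  # hypothetical source table
log_pipeline_metrics("orders_ingest", orders, start)
```

Storing the metrics in a Delta table keeps them queryable with SQL and versioned alongside the rest of the lakehouse, so dashboards or alerts can be built directly on top of them.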