Your team has encountered intermittent performance issues with a Spark-based data pipeline in Databricks. What is the most effective method to monitor and alert on these performance anomalies?
A. Use Databricks' built-in monitoring tools for daily manual checks of job and cluster performance.
B. Enable Databricks workspace auditing and review the audit logs weekly for indicators of performance degradation.
C. Set up Azure Monitor alerts driven by the specific Spark event logs and metrics that signal performance issues.
D. Implement a custom Spark listener that records performance metrics to Azure Log Analytics, and configure anomaly alerts on them.
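To make the alerting options concrete: once Spark metrics are flowing into a Log Analytics workspace (for example via the open-source `mspnp/spark-monitoring` library, which writes to custom tables such as `SparkMetric_CL` — the table and column names here are assumptions tied to that library and may differ in your setup), an Azure Monitor alert rule can be bound to a Kusto query like this sketch:

```kusto
// Hypothetical alert query: assumes Spark metrics are shipped to the
// SparkMetric_CL custom table (spark-monitoring library naming).
// Fires when average JVM heap usage per cluster exceeds a threshold.
SparkMetric_CL
| where name_s contains "jvm.heap.used"
| summarize avgHeapBytes = avg(value_d)
    by clusterName_s, bin(TimeGenerated, 5m)
| where avgHeapBytes > 2000000000  // ~2 GB; tune for your cluster size
```

Attaching this query to a log-search alert rule gives automated, near-real-time notification of anomalies, in contrast to the manual daily or weekly reviews described in options A and B.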