How can you effectively set up real-time monitoring for your data pipelines in Databricks to promptly detect and alert on failures?
A. Implement custom logging within your Spark jobs to send alerts to a monitoring service like Datadog (a sketch of this approach follows the options).
B. Configure Azure Event Hubs to collect pipeline logs and analyze them in real time with Azure Stream Analytics.
C. Integrate Databricks with Azure Log Analytics and set up alerts based on specific log metrics.
D. Use Databricks' native event logging with email notifications for job failures (a configuration sketch follows the options).
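For option A, here is a minimal sketch of what in-job alerting could look like, assuming the Datadog API key is kept in a Databricks secret scope; the scope name, tags, and `run_pipeline` are hypothetical placeholders, and the endpoint is Datadog's public Events API:

```python
# Sketch of option A: wrap the job body and post a Datadog event on failure.
import json
import traceback
import urllib.request

DATADOG_EVENTS_URL = "https://api.datadoghq.com/api/v1/events"

def post_datadog_event(api_key: str, title: str, text: str) -> None:
    """Send a failure event to the Datadog Events API."""
    payload = json.dumps({
        "title": title,
        "text": text,
        "alert_type": "error",
        "tags": ["pipeline:daily_ingest", "env:prod"],  # hypothetical tags
    }).encode("utf-8")
    req = urllib.request.Request(
        DATADOG_EVENTS_URL,
        data=payload,
        headers={"Content-Type": "application/json", "DD-API-KEY": api_key},
    )
    urllib.request.urlopen(req, timeout=10)

# dbutils is available inside Databricks notebooks; scope/key names are assumptions.
api_key = dbutils.secrets.get(scope="monitoring", key="datadog-api-key")

try:
    run_pipeline()  # placeholder for the actual Spark job logic
except Exception:
    post_datadog_event(
        api_key,
        title="Pipeline failure: daily_ingest",
        text=traceback.format_exc(),
    )
    raise  # re-raise so the Databricks job run is still marked as failed
```

Re-raising after the event is posted keeps the job run marked as failed in Databricks, so native notifications (option D) can still fire alongside the custom alert.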
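For option D, failure emails are configured on the job itself rather than in job code; one way to set them programmatically is the Jobs API 2.1 partial-update endpoint. The workspace URL, token handling, job ID, and email address below are assumptions for illustration:

```python
# Sketch of option D: enable on-failure email notifications for an existing job.
import json
import urllib.request

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # assumption
TOKEN = "<personal-access-token>"  # assumption; store in a secret scope in practice

settings = {
    "job_id": 123,  # hypothetical job ID
    "new_settings": {
        "email_notifications": {
            "on_failure": ["data-team@example.com"],  # hypothetical address
        }
    },
}

req = urllib.request.Request(
    f"{WORKSPACE_URL}/api/2.1/jobs/update",
    data=json.dumps(settings).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
)
urllib.request.urlopen(req, timeout=10)
```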