A data pipeline in Azure Databricks is experiencing intermittent failures that are challenging to diagnose. What steps would you take to enhance logging for better troubleshooting?
A. Modify the Spark jobs to include custom logging statements that capture detailed execution metrics and state.
B. Use Databricks' built-in event logging with default settings, relying on Azure Monitor for anomaly detection.
C. Enable DEBUG level logging for Spark jobs and configure Databricks to export detailed logs to Azure Log Analytics.
D. Implement a logging sidecar container in Azure Kubernetes Service (AKS) to capture stdout and stderr logs from Databricks jobs.
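As a rough illustration of what the custom-logging and DEBUG-level approaches mentioned in the options might look like inside a job, here is a minimal Python sketch. The stage name, metric fields, and the `run_stage` helper are all hypothetical; only the commented `setLogLevel` call reflects the actual Spark API, and it would run only on a live cluster.

```python
import logging
import time

# Verbose logging so per-stage diagnostics are captured (analogous to option C).
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("pipeline.stage")

def run_stage(name, rows):
    # Custom logging statements capturing execution metrics and state (option A).
    start = time.perf_counter()
    logger.debug("stage=%s starting, input_rows=%d", name, rows)
    processed = rows  # placeholder for the real transformation
    logger.info(
        "stage=%s finished in %.3fs, output_rows=%d",
        name, time.perf_counter() - start, processed,
    )
    return processed

# On a real Databricks cluster you would also raise Spark's own verbosity:
# spark.sparkContext.setLogLevel("DEBUG")

run_stage("ingest", 1000)
```

In practice these structured log lines would then be shipped to Azure Log Analytics via the cluster's diagnostic settings, so that intermittent failures can be correlated across runs.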