
Answer-first summary for fast verification
Answer: Enable DEBUG level logging for Spark jobs and configure Databricks to export detailed logs to Azure Log Analytics.
Enabling DEBUG-level logging for Spark jobs exposes execution details (task scheduling, variable state, and other internals) that standard INFO-level logs omit, and those granular details are often exactly what is needed to diagnose intermittent failures. Exporting the logs to Azure Log Analytics centralizes them in a single queryable store, where Kusto (KQL) queries and visualizations make search, analysis, and troubleshooting straightforward. Together, these steps provide a comprehensive, searchable record of pipeline behavior, so root causes of intermittent failures can be identified far more efficiently.
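A minimal sketch of the driver-side piece, with the caveat that exact cluster setup varies: on Databricks the Spark log level is raised with `spark.sparkContext.setLogLevel("DEBUG")`, while shipping logs to Log Analytics is typically handled by cluster log delivery or diagnostic settings (not shown here, and assumed rather than demonstrated). The snippet below illustrates only the application-logging half, using Python's standard `logging` module to retain DEBUG records as JSON lines, a shape Log Analytics can ingest; the logger name `pipeline.debug` and helper `build_pipeline_logger` are hypothetical.

```python
import json
import logging
from io import StringIO

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line (easy to ship and query)."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

def build_pipeline_logger(stream):
    # Hypothetical helper: a DEBUG-level logger writing JSON lines to `stream`.
    # On a real cluster the handler would point at a delivered log location,
    # not an in-memory buffer.
    logger = logging.getLogger("pipeline.debug")
    logger.setLevel(logging.DEBUG)  # retain granular detail, not just WARN/ERROR
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonFormatter())
    logger.handlers = [handler]
    logger.propagate = False
    return logger

buf = StringIO()
log = build_pipeline_logger(buf)
log.debug("stage=extract rows=%d", 1042)  # DEBUG records are now captured
record = json.loads(buf.getvalue())
```

Once records like these land in a Log Analytics workspace, KQL queries can filter on `level` and correlate failures across runs.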
Author: LeetQuiz Editorial Team
A data pipeline in Azure Databricks is experiencing intermittent failures that are challenging to diagnose. What steps would you take to enhance logging for better troubleshooting?
A
Modify the Spark jobs to include custom logging statements that capture detailed execution metrics and state.
B
Use Databricks' built-in event logging with default settings, relying on Azure Monitor for anomaly detection.
C
Enable DEBUG level logging for Spark jobs and configure Databricks to export detailed logs to Azure Log Analytics.
D
Implement a logging sidecar container in Azure Kubernetes Service (AKS) to capture stdout and stderr logs from Databricks jobs.
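For context on option C, the Spark-side switch can also be applied cluster-wide through the log4j configuration rather than per-notebook. A hedged sketch, assuming a classic `log4j.properties`-style setup (newer Spark versions use log4j2 and a different file format):

```properties
# Raise the root level so DEBUG records are retained (verbose; use temporarily)
log4j.rootCategory=DEBUG, console
# Keep chatty third-party namespaces quieter to reduce noise
log4j.logger.org.apache.hadoop=WARN
```

The delivered logs can then be routed to an Azure Log Analytics workspace via the cluster's log delivery or diagnostic settings.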