
When your Databricks cluster shows unexpected performance degradation and you suspect it is caused by a complex interaction between job configurations and specific data patterns, what is the best method for diagnosing the issue using custom metrics and logs?
A
Implementing a logging framework in your jobs that pushes custom metrics to Azure Log Analytics for advanced querying
B
Utilizing the REST API to export job and cluster metrics for analysis with custom Python scripts
C
Directly querying the Spark event logs stored in DBFS (Databricks File System) for custom job execution patterns
D
Relying solely on Databricks' built-in cluster metrics for troubleshooting without custom enhancements
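For context on the approach described in option B, below is a minimal, hypothetical sketch of pulling job-run and cluster-event data through the Databricks REST API for offline analysis. It assumes a workspace URL in `DATABRICKS_HOST`, a personal access token in `DATABRICKS_TOKEN`, and placeholder job and cluster IDs; it is an illustration, not the exam's reference answer.

```python
# Sketch of option B: export job-run and cluster-event data via the
# Databricks REST API, then correlate them in custom Python analysis.
# DATABRICKS_HOST / DATABRICKS_TOKEN and the IDs below are placeholders.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def list_job_runs(job_id: int, limit: int = 25) -> list:
    """Return recent runs for a job, including state and timing fields."""
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/list",
        headers=HEADERS,
        params={"job_id": job_id, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json().get("runs", [])


def cluster_events(cluster_id: str, limit: int = 50) -> list:
    """Return recent cluster events (resizing, restarts, node losses, ...)."""
    resp = requests.post(
        f"{HOST}/api/2.0/clusters/events",
        headers=HEADERS,
        json={"cluster_id": cluster_id, "limit": limit, "order": "DESC"},
    )
    resp.raise_for_status()
    return resp.json().get("events", [])


if __name__ == "__main__":
    # Print run durations alongside cluster events so slow runs can be
    # matched against autoscaling or node-loss events by timestamp.
    for run in list_job_runs(job_id=123):  # placeholder job_id
        duration_s = (run.get("execution_duration") or 0) / 1000
        print(run["run_id"], run["state"].get("result_state"), f"{duration_s:.0f}s")

    for ev in cluster_events(cluster_id="0101-000000-abcd123"):  # placeholder
        print(ev["timestamp"], ev["type"])
```

The exported run durations and cluster events can then be joined on timestamps in whatever analysis tooling you prefer, which is the kind of custom correlation across job configurations and data patterns that the question asks about.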