To enhance logging for a complex data pipeline in Databricks, ensuring comprehensive tracking of data flow and transformations as well as identification of bottlenecks, which approach would you choose?
A
Implement custom logging within each component of the pipeline, storing logs in Delta Lake for querying and analysis.
B
Enable Diagnostic Logging in Azure Databricks and stream logs to Azure Log Analytics workspace.
C
Utilize Databricks native logging features and integrate with Azure Monitor for centralized log management and analysis.
D
Rely on Spark's built-in UI for monitoring pipeline executions without additional logging.
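As a hedged illustration of option A, the sketch below shows one way a pipeline stage might emit structured log records for later analysis. The `PipelineLogger` class and its field names are hypothetical, and the commented-out write at the end indicates where such records would typically be appended to a Delta table (table name `ops.pipeline_logs` is an assumption) so they can be queried with SQL.

```python
import json
import time
import uuid


class PipelineLogger:
    """Hypothetical helper that collects structured log records per pipeline stage."""

    def __init__(self, pipeline_name):
        self.pipeline_name = pipeline_name
        self.run_id = str(uuid.uuid4())  # one id per pipeline run, for correlation
        self.records = []

    def log(self, stage, event, rows=None, duration_s=None):
        # Each record captures which stage ran, what happened, and how long it took,
        # which is the raw material for bottleneck analysis.
        record = {
            "pipeline": self.pipeline_name,
            "run_id": self.run_id,
            "stage": stage,
            "event": event,
            "rows": rows,
            "duration_s": duration_s,
            "ts": time.time(),
        }
        self.records.append(record)
        return record

    def flush_json(self):
        # On Databricks, the records would instead be appended to a Delta table, e.g.:
        #   spark.createDataFrame(self.records).write.format("delta") \
        #        .mode("append").saveAsTable("ops.pipeline_logs")
        return json.dumps(self.records)


logger = PipelineLogger("orders_etl")
logger.log("ingest", "read_complete", rows=10_000, duration_s=3.2)
logger.log("transform", "dedupe_complete", rows=9_874, duration_s=1.1)
print(len(logger.records))  # 2 records collected for this run
```

Because every record carries the same `run_id` and a per-stage `duration_s`, a simple `GROUP BY stage` over the Delta table would surface the slowest stages across runs.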