When encountering complex, intermittent performance issues in a real-time data processing pipeline utilizing Azure Databricks and Azure Event Hubs, which strategy would you employ for real-time diagnosis?
A. Implementing custom telemetry in the Databricks notebooks and Event Hubs capture functions to log detailed performance data to Azure Log Analytics
B. Using the Databricks Spark UI and Event Hubs metrics in the Azure portal for manual correlation of performance issues
C. Setting up Azure Monitor with Application Insights to track performance metrics and dependencies in real time, utilizing the live metrics stream
D. Streaming system and application metrics to Azure Time Series Insights for real-time analysis and anomaly detection
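For context, a minimal sketch of what the custom-telemetry approach described in option A might look like inside a Databricks notebook, assuming the legacy Log Analytics HTTP Data Collector API is used for ingestion. The workspace ID, shared key, log type, and metric fields below are illustrative placeholders, not values from the question.

```python
# Sketch: post per-microbatch performance records from a Databricks notebook
# to Azure Log Analytics via the HTTP Data Collector API (legacy ingestion path).
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

import requests

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder
SHARED_KEY = "<log-analytics-shared-key>"      # placeholder (base64 workspace key)
LOG_TYPE = "PipelinePerf"                      # surfaces as the PipelinePerf_CL table


def _build_signature(date: str, body_length: int) -> str:
    """Build the SharedKey authorization header required by the Data Collector API."""
    string_to_sign = f"POST\n{body_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
    decoded_key = base64.b64decode(SHARED_KEY)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"


def post_metrics(records: list[dict]) -> None:
    """Send a batch of custom performance records to Log Analytics."""
    body = json.dumps(records)
    rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    headers = {
        "Content-Type": "application/json",
        "Authorization": _build_signature(rfc1123_date, len(body)),
        "Log-Type": LOG_TYPE,
        "x-ms-date": rfc1123_date,
    }
    url = f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
    requests.post(url, data=body, headers=headers, timeout=10).raise_for_status()


# Example: record microbatch duration and consumer lag observed for a streaming query.
post_metrics([{"batchId": 42, "durationMs": 1875, "inputRowsPerSecond": 1200.0, "eventHubLag": 3500}])
```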