
To effectively manage costs and performance by proactively scaling your Azure Databricks clusters based on predictive analysis of workload patterns, which approach should you adopt?
A. Manually scaling clusters based on expected workload increases, guided by past trends observed in Azure Monitor
B. Configuring Azure Autoscale based on predefined metrics and thresholds, without predictive analysis
C. Utilizing Azure Machine Learning to model and predict workload patterns from historical Databricks job and cluster metrics, applying scaling decisions via Azure Automation
D. Implementing custom scripts in Azure Databricks to adjust cluster size on the fly based on current job queue lengths and historical execution times
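The approach in option C is a predict-then-act loop: fit a model on historical cluster metrics, forecast the next period's demand, then apply the resize. A minimal stdlib-only sketch of the forecasting half is below; the trend model, the sample data, and the function name are illustrative assumptions, not an Azure API. In a real deployment the model would be trained in Azure Machine Learning on Databricks job and cluster metrics, and the scaling decision applied by an Azure Automation runbook calling the Databricks Clusters API.

```python
def forecast_workers(history):
    """Fit a least-squares linear trend over hourly worker-demand
    samples and return the forecast for the next hour, clamped to
    at least one worker. Stands in for the Azure ML model in option C."""
    n = len(history)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_x
    # Extrapolate one step past the observed window.
    return max(1, round(intercept + slope * n))

# Rising demand over the last six hours -> scale up ahead of the peak,
# e.g. by passing this target to a cluster resize call.
demand = [4, 5, 5, 6, 7, 8]
target = forecast_workers(demand)
```

Scaling to `target` before the predicted peak is what distinguishes this approach from options B and D, which only react to metrics already being observed.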