
Answer-first summary for fast verification
Answer: Azure Machine Learning to model workload patterns and Azure Function to adjust Databricks cluster size based on predictions
The optimal solution for predictive scaling based on performance metrics involves two services working together:

1. **Azure Machine Learning**: Analyzes historical performance metrics and workload patterns to build predictive models. These models forecast future workload patterns and the resource needs of Databricks clusters.
2. **Azure Function**: Triggers the scaling of Databricks clusters based on the predictions from Azure Machine Learning. Automating the cluster-size adjustment ensures variable workloads are handled efficiently.

This approach leverages machine learning for data-driven predictions and automates scaling, optimizing resource use and cost-effectiveness for Databricks clusters.
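As a rough illustration of step 2, the sketch below shows the core logic an Azure Function might run: translate a workload forecast (obtained from an Azure ML endpoint) into a worker count and build the body for the Databricks Clusters API resize call (`POST /api/2.0/clusters/resize`). The cluster ID, per-worker capacity, and worker bounds are illustrative placeholders, not values from the question.

```python
import json
import math

# Databricks Clusters API endpoint for resizing an existing cluster.
DATABRICKS_RESIZE_ENDPOINT = "/api/2.0/clusters/resize"

def workers_for_load(predicted_load, per_worker_capacity=100.0,
                     min_workers=2, max_workers=20):
    """Map a forecast load (e.g. jobs/hour) to a worker count.

    The capacity and bounds are hypothetical tuning knobs; in practice they
    would come from your own benchmarking of the cluster.
    """
    needed = math.ceil(predicted_load / per_worker_capacity)
    return max(min_workers, min(max_workers, needed))

def build_resize_payload(cluster_id, predicted_load):
    """Build the JSON body for POST /api/2.0/clusters/resize."""
    return {
        "cluster_id": cluster_id,
        "num_workers": workers_for_load(predicted_load),
    }

# Example: a forecast of 750 jobs/hour with 100 jobs/hour per worker
# rounds up to 8 workers.
payload = build_resize_payload("example-cluster-id", 750)
print(json.dumps(payload))
```

A real Azure Function would then POST this payload to the Databricks workspace URL with a bearer token; that HTTP call is omitted here to keep the sketch self-contained.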
Author: LeetQuiz Editorial Team
To efficiently manage variable workloads by implementing predictive scaling for your Databricks clusters based on historical performance metrics, which combination of Azure services and features would you choose?
A
Azure Logic Apps to monitor Databricks metrics and scale clusters based on predefined rules
B
Utilizing Databricks REST API to dynamically scale clusters in response to Azure Log Analytics alerts
C
Azure Machine Learning to model workload patterns and Azure Function to adjust Databricks cluster size based on predictions
D
Azure Monitor Autoscale to automatically scale Databricks clusters based on CPU and memory utilization