After deploying a machine learning model into production within Azure Databricks, how can you ensure it scales effectively with increasing data volumes and user requests?
A
Implementing custom scalability testing scripts within Databricks notebooks that incrementally increase data load and request rates, monitoring model performance and resource utilization
B
Leveraging Databricks' MLflow for model tracking, combined with Azure Kubernetes Service (AKS) to simulate scalable deployment scenarios
C
Conducting A/B testing with varying sizes of data inputs and concurrent requests, using Azure Event Hubs to generate traffic and Azure Monitor for performance insights
D
Utilizing Azure Machine Learning's model deployment and management capabilities to simulate load scenarios and gather performance metrics for analysis
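To make option A concrete, here is a minimal sketch of an incremental load test one might run from a Databricks notebook. The `predict` function is a hypothetical stand-in for a call to the deployed model's serving endpoint (in practice an HTTP request); the batch sizes and repeat count are illustrative assumptions.

```python
import time
import statistics

def predict(batch):
    # Hypothetical stand-in for the deployed model endpoint;
    # in a real test this would be an HTTP call to the serving URL.
    return [x * 2 for x in batch]

def load_test(batch_sizes, repeats=5):
    """Incrementally increase the data load and record mean latency per batch size."""
    results = {}
    for size in batch_sizes:
        latencies = []
        for _ in range(repeats):
            batch = list(range(size))
            start = time.perf_counter()
            predict(batch)
            latencies.append(time.perf_counter() - start)
        results[size] = statistics.mean(latencies)
    return results

# Ramp the request size up by an order of magnitude each step and
# inspect how latency grows; pair this with resource-utilization
# metrics (e.g. from Ganglia or Azure Monitor) for the full picture.
report = load_test([10, 100, 1000])
for size, latency in report.items():
    print(f"batch={size:5d} mean_latency={latency:.6f}s")
```

A sub-linear latency curve suggests the deployment scales well with data volume; a super-linear one flags a bottleneck worth profiling before production traffic grows.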