
Answer-first summary for fast verification
Answer: Utilizing MLflow's model registry webhooks to integrate with Azure Functions, automatically triggering a retraining pipeline based on model performance metrics
Setting up manual monitoring of model predictions versus actual outcomes (Option A) is inefficient: periodic reviews are time-consuming and prone to human error. Implementing a Databricks notebook that periodically calculates performance metrics (Option B) is a step forward, but it still requires manual intervention to act on the results. Leveraging Azure Machine Learning services (Option D) is viable but adds an extra service and operational complexity. Utilizing MLflow's model registry webhooks with Azure Functions (Option C) is the most suitable strategy: it provides continuous monitoring and automatically triggers a retraining pipeline when significant drift is detected, ensuring a timely response without manual intervention.
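The core of the automated decision in Option C is a metric comparison that the triggered Azure Function would run. A minimal sketch of that check, assuming an AUC metric and an illustrative 5% relative-drop threshold (both are assumptions, not part of the question):

```python
# Hedged sketch: the drift check an Azure Function might run when the
# MLflow registry webhook fires. Metric names and the threshold are
# illustrative assumptions.

def should_retrain(baseline_auc: float, current_auc: float,
                   max_relative_drop: float = 0.05) -> bool:
    """Flag retraining when the live AUC falls more than
    max_relative_drop below the baseline recorded at deployment."""
    if baseline_auc <= 0:
        raise ValueError("baseline_auc must be positive")
    relative_drop = (baseline_auc - current_auc) / baseline_auc
    return relative_drop > max_relative_drop


if __name__ == "__main__":
    # Baseline AUC 0.90; a live AUC of 0.82 is a ~8.9% drop -> retrain.
    print(should_retrain(0.90, 0.82))  # True
    print(should_retrain(0.90, 0.89))  # False
```

In practice the function would fetch the baseline from the MLflow tracking server and, when the check fires, kick off the retraining pipeline (for example, by triggering a Databricks job).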
Author: LeetQuiz Editorial Team
After deploying a machine learning model to production using MLflow on Databricks, which strategy would you implement to continuously monitor its performance and trigger retraining upon detecting significant drift?
A
Setting up manual monitoring of model predictions versus actual outcomes, relying on periodic reviews to determine if retraining is necessary
B
Implementing a Databricks notebook that periodically calculates model performance metrics and compares them against thresholds to decide on retraining
C
Utilizing MLflow's model registry webhooks to integrate with Azure Functions, automatically triggering a retraining pipeline based on model performance metrics
D
Leveraging Azure Machine Learning services to monitor model drift and automatically retrain the model in Databricks using the MLflow tracking server
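To make Option C concrete: Databricks exposes a Registry Webhooks REST API for wiring the model registry to an external HTTP endpoint such as an Azure Function. The sketch below builds the registration payload; the model name, function URL, and secret placeholder are illustrative assumptions.

```python
# Hedged sketch of registering a model-registry webhook (Option C) via the
# Databricks Registry Webhooks REST API. The model name and Azure Function
# URL are hypothetical placeholders.
import json

payload = {
    "model_name": "churn-classifier",  # hypothetical registered model
    "events": ["MODEL_VERSION_TRANSITIONED_STAGE"],
    "description": "Notify Azure Function when a model version changes stage",
    "status": "ACTIVE",
    "http_url_spec": {
        "url": "https://example-fn.azurewebsites.net/api/retrain-check",
        "secret": "<shared-secret-for-payload-signing>",
    },
}

# This payload would be POSTed to the workspace endpoint, e.g.:
#   POST {workspace-url}/api/2.0/mlflow/registry-webhooks/create
print(json.dumps(payload, indent=2))
```

Once registered, every stage transition (or other subscribed registry event) calls the Azure Function, which can evaluate performance metrics and launch the retraining pipeline without any manual step.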