After deploying a machine learning model to production using MLflow on Databricks, which strategy would you implement to continuously monitor its performance and trigger retraining upon detecting significant drift?
A. Setting up manual monitoring of model predictions versus actual outcomes, relying on periodic reviews to determine if retraining is necessary
B. Implementing a Databricks notebook that periodically calculates model performance metrics and compares them against thresholds to decide on retraining
C. Utilizing MLflow's Model Registry webhooks to integrate with Azure Functions, automatically triggering a retraining pipeline based on model performance metrics
D. Leveraging Azure Machine Learning services to monitor model drift and automatically retrain the model in Databricks using the MLflow tracking server
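Option C relies on Databricks Model Registry webhooks calling out to an external HTTP endpoint. The sketch below is a minimal illustration, assuming the `databricks-registry-webhooks` client package is available in the workspace; the model name, Azure Function URL, and secret are placeholders, not values from the question.

```python
# Minimal sketch of option C, assuming the databricks-registry-webhooks
# client package; the model name, URL, and secret are placeholders.
from databricks_registry_webhooks import RegistryWebhooksClient, HttpUrlSpec

# HTTP endpoint the webhook will call -- e.g. an Azure Function that
# kicks off a retraining pipeline when it receives the event payload.
http_spec = HttpUrlSpec(
    url="https://my-function-app.azurewebsites.net/api/trigger-retraining",
    secret="shared-secret-for-payload-validation",
)

# Register the webhook against a registered model so that registry events
# (here, a model version stage transition) notify the Azure Function.
webhook = RegistryWebhooksClient().create_webhook(
    model_name="churn_classifier",
    events=["MODEL_VERSION_TRANSITIONED_STAGE"],
    http_url_spec=http_spec,
    description="Notify retraining Function on stage transitions",
    status="ACTIVE",
)
print(webhook)
```

Note that registry webhooks fire on registry events rather than on metric values themselves, so the performance-metric comparison would typically live in the monitoring job that requests the stage transition, or inside the Azure Function that decides whether to launch retraining.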