
You deployed an ML model into production a year ago to serve predictions for a business-critical application. To ensure the model remains accurate, you have been collecting all raw requests sent to your model prediction service each month and sending a subset of these requests to a human labeling service to evaluate your model's performance. Over the past year, you have observed varying patterns of model degradation: sometimes performance drops significantly within a month, while other times it takes several months before a decrease becomes noticeable. Because the human labeling service is expensive, you need a strategy that balances how often you send serving data for labeling and retrain the model, so that performance stays high without incurring unnecessary costs. What should you do?
A. Train an anomaly detection model on the training dataset, and run all incoming requests through this model. If an anomaly is detected, send the most recent serving data to the labeling service.
B. Identify temporal patterns in your model's performance over the previous year. Based on these patterns, create a schedule for sending serving data to the labeling service for the next year.
C. Compare the cost of the labeling service with the lost revenue due to model performance degradation over the past year. If the lost revenue is greater than the cost of the labeling service, increase the frequency of model retraining; otherwise, decrease the model retraining frequency.
D. Run training-serving skew detection batch jobs every few days to compare the aggregate statistics of the features in the training dataset with recent serving data. If skew is detected, send the most recent serving data to the labeling service.
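To make the trade-offs concrete, here is a minimal sketch of the approach in option A: fit an anomaly detector on the training features and score incoming serving requests, triggering labeling only when an unusual fraction of requests looks anomalous. The tabular feature shapes, the IsolationForest hyperparameters, and the 5% alert threshold are illustrative assumptions, not details from the question.

```python
# Option A sketch: anomaly detection on serving requests (illustrative data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))    # stands in for training features
X_serving = rng.normal(loc=0.5, scale=1.2, size=(1000, 8))  # stands in for recent serving requests

# Fit the detector on the training distribution only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

# predict() returns -1 for anomalous rows and 1 for inliers.
anomaly_rate = float(np.mean(detector.predict(X_serving) == -1))

if anomaly_rate > 0.05:  # illustrative alert threshold
    print(f"Anomaly rate {anomaly_rate:.1%}: send recent serving data to the labeling service")
else:
    print(f"Anomaly rate {anomaly_rate:.1%}: skip labeling this cycle")
```

Note that this scores individual requests against the training distribution, so it reacts to input drift rather than to the label quality of the model's outputs.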
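Option D instead compares aggregate statistics of training features against a recent serving window in a periodic batch job. Below is a minimal sketch of one way such a check could look, using a per-feature two-sample Kolmogorov-Smirnov test; the feature names, the serving window, and the significance threshold are assumptions for illustration only.

```python
# Option D sketch: batch training-serving skew check (illustrative data).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train = {"age": rng.normal(40, 10, 5000), "amount": rng.exponential(50, 5000)}
serving = {"age": rng.normal(45, 12, 1000), "amount": rng.exponential(50, 1000)}

skewed = []
for feature in train:
    # Compare the training distribution of each feature with recent serving data.
    stat, p_value = ks_2samp(train[feature], serving[feature])
    if p_value < 0.01:  # illustrative significance threshold
        skewed.append((feature, round(stat, 3)))

if skewed:
    print("Skew detected in:", skewed, "-> send recent serving data to the labeling service")
else:
    print("No significant skew in this batch window")
```

Running the comparison as a scheduled batch job keeps the expensive labeling step conditional: labels are only purchased when the serving data has measurably drifted from what the model was trained on.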