
Answer-first summary for fast verification
Answer: Leverage the Continuous Evaluation feature in AI Platform to automatically compare the mean average precision across the models, enabling real-time performance tracking and alerts for significant deviations.
Google AI Platform's Continuous Evaluation feature is designed to compare the performance of deployed model versions over time, reporting metrics such as mean average precision against ground-truth labels. It supports ongoing monitoring and can raise alerts on significant performance deviations, making it the most scalable, cost-efficient, and effective option under the given constraints. Manual evaluation and the What-If Tool, while useful for ad-hoc analysis, do not offer the same automation or scalability for continuous monitoring. For details, see: https://cloud.google.com/ai-platform/prediction/docs/continuous-evaluation
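To make the metric concrete, here is a minimal sketch of how mean average precision (mAP) can be computed and compared across two model versions. The helper function, sample data, and tolerance threshold are illustrative assumptions, not the AI Platform API; in production, Continuous Evaluation computes these metrics for you from sampled, labeled predictions.

```python
# Illustrative mAP comparison between two model versions.
# Uses scikit-learn's average_precision_score; data and names are made up.
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_scores):
    """Mean of per-class average precision (one-vs-rest).

    y_true:   (n_samples, n_classes) binary indicator matrix
    y_scores: (n_samples, n_classes) predicted scores
    """
    per_class = [
        average_precision_score(y_true[:, c], y_scores[:, c])
        for c in range(y_true.shape[1])
    ]
    return float(np.mean(per_class))

# Ground-truth labels for 4 products across 3 categories.
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]])

# Scores from two deployed model versions on the same samples.
scores_v1 = np.array([[0.9, 0.05, 0.05],
                      [0.2, 0.7,  0.1 ],
                      [0.1, 0.2,  0.7 ],
                      [0.6, 0.3,  0.1 ]])
scores_v2 = np.array([[0.5, 0.3, 0.2],
                      [0.4, 0.4, 0.2],
                      [0.3, 0.3, 0.4],
                      [0.2, 0.5, 0.3]])

map_v1 = mean_average_precision(y_true, scores_v1)
map_v2 = mean_average_precision(y_true, scores_v2)
print(f"v1 mAP: {map_v1:.3f}, v2 mAP: {map_v2:.3f}")

# A simple alerting rule: flag a version whose mAP regresses past a tolerance,
# analogous to the deviation alerts Continuous Evaluation can drive.
TOLERANCE = 0.05
if map_v1 - map_v2 > TOLERANCE:
    print("v2 regressed beyond tolerance")
```

The key point the question tests is that this comparison should run automatically on fresh production traffic, not as a periodic manual job against a static held-out set.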
Author: LeetQuiz Editorial Team
You are a Machine Learning Engineer at a retail company that has deployed multiple versions of an image classification model on Google's AI Platform to categorize products in real-time. The models are critical for inventory management and have been trained on datasets reflecting seasonal product variations. The company requires a robust method to monitor and compare the performance of these model versions over time, considering factors like scalability, cost-efficiency, and the ability to handle fluctuating data distributions. Which of the following approaches is the MOST effective for comparing the performance of these model versions under the given constraints? (Choose one correct option)
A
Manually evaluate the loss performance of each model using a held-out dataset every month, ensuring the dataset is updated to reflect current product trends.
B
Use the validation dataset to assess the loss performance of each model, adjusting the validation set periodically to account for new product categories.
C
Implement the What-If Tool to visually compare the receiver operating characteristic (ROC) curves for each model, focusing on areas with high classification uncertainty.
D
Leverage the Continuous Evaluation feature in AI Platform to automatically compare the mean average precision across the models, enabling real-time performance tracking and alerts for significant deviations.
E
Combine both manual evaluation of loss performance on a held-out dataset and the use of the What-If Tool for ROC curve comparison to ensure comprehensive performance monitoring.