
You are a Machine Learning Engineer at a retail company that has deployed multiple versions of an image classification model on Google's AI Platform to categorize products in real-time. The models are critical for inventory management and have been trained on datasets reflecting seasonal product variations. The company requires a robust method to monitor and compare the performance of these model versions over time, considering factors like scalability, cost-efficiency, and the ability to handle fluctuating data distributions. Which of the following approaches is the MOST effective for comparing the performance of these model versions under the given constraints? (Choose one correct option)
A. Manually evaluate the loss performance of each model using a held-out dataset every month, ensuring the dataset is updated to reflect current product trends.
B. Use the validation dataset to assess the loss performance of each model, adjusting the validation set periodically to account for new product categories.
C. Implement the What-If Tool to visually compare the receiver operating characteristic (ROC) curves for each model, focusing on areas with high classification uncertainty.
D. Leverage the Continuous Evaluation feature in AI Platform to automatically compare the mean average precision across the models, enabling real-time performance tracking and alerts for significant deviations.
E. Combine both manual evaluation of loss performance on a held-out dataset and the use of the What-If Tool for ROC curve comparison to ensure comprehensive performance monitoring.
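For context, the metric named in option D, mean average precision (mAP) compared across model versions with alerts on significant deviations, can be sketched generically. This is an illustrative monitoring loop written from scratch, not the AI Platform Continuous Evaluation API; the function names and the 0.05 alert threshold are assumptions for the example:

```python
def average_precision(scores, labels):
    """AP for one class: rank predictions by confidence,
    average the precision at each relevant (positive) hit."""
    ranked = [lbl for _, lbl in sorted(zip(scores, labels), key=lambda p: -p[0])]
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_class):
    """mAP: mean of per-class APs.
    per_class is a list of (scores, labels) pairs, one per product category."""
    return sum(average_precision(s, l) for s, l in per_class) / len(per_class)

def significant_deviation(map_a, map_b, threshold=0.05):
    """Flag two model versions whose mAP differs by more than the
    (assumed) alert threshold, mimicking a deviation alert."""
    return abs(map_a - map_b) > threshold

# Two product categories, evaluated for one model version:
eval_data = [
    ([0.9, 0.8, 0.7], [1, 0, 1]),  # AP = (1/1 + 2/3) / 2
    ([0.9, 0.8], [0, 1]),          # AP = 1/2
]
print(mean_average_precision(eval_data))
```

In a managed setup, the evaluation data would come from sampled online predictions with ground-truth labels, and the comparison would run continuously rather than on demand; the sketch only shows the metric and alert logic.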