Google Professional Machine Learning Engineer


You are a Machine Learning Engineer at a retail company that has deployed multiple versions of an image classification model on Google's AI Platform to categorize products in real-time. The models are critical for inventory management and have been trained on datasets reflecting seasonal product variations. The company requires a robust method to monitor and compare the performance of these model versions over time, considering factors like scalability, cost-efficiency, and the ability to handle fluctuating data distributions. Which of the following approaches is the MOST effective for comparing the performance of these model versions under the given constraints? (Choose one correct option)
A. Manually export prediction logs for each model version and compare accuracy metrics offline in spreadsheets.

B. Use the What-If Tool to inspect and compare individual predictions from each model version.

C. Use AI Platform's Continuous Evaluation feature to automatically sample online predictions, compute metrics such as mean average precision, and compare model versions over time.

Explanation:

The Continuous Evaluation feature of AI Platform is designed for exactly this task: it regularly samples a model version's online predictions, obtains ground-truth labels for the sample, and computes evaluation metrics such as mean average precision for each version, so that performance can be tracked and compared over time. Because sampling and metric computation run automatically as prediction traffic arrives, the approach scales with request volume, keeps labeling costs bounded through a configurable sampling rate, and surfaces performance drift caused by shifting data distributions. Manual comparison and the What-If Tool, while useful for ad hoc analysis, do not offer the same automation or scalability for continuous monitoring. For more details, refer to: https://cloud.google.com/ai-platform/prediction/docs/continuous-evaluation
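
As a rough illustration of what Continuous Evaluation automates, the following sketch computes per-version mean average precision from a labeled sample of predictions. It uses scikit-learn, and the load_labeled_sample() helper is hypothetical: in a real pipeline that data would come from the version's sampled and labeled prediction logs, which Continuous Evaluation collects for you.

# Minimal sketch, assuming a hypothetical load_labeled_sample() that
# returns ground truth and predicted class scores for one model version.
import numpy as np
from sklearn.metrics import average_precision_score

def load_labeled_sample(model_version: str):
    # Placeholder: stands in for pulling a labeled sample of this
    # version's online predictions (e.g., from exported prediction logs).
    rng = np.random.default_rng(sum(map(ord, model_version)))
    n_examples, n_classes = 500, 10
    labels = rng.integers(0, n_classes, size=n_examples)
    y_true = np.eye(n_classes)[labels]          # one-hot ground truth
    y_scores = rng.dirichlet(np.ones(n_classes), size=n_examples)
    return y_true, y_scores

def mean_average_precision(y_true, y_scores):
    # Macro-average per-class average precision, skipping classes with
    # no positive examples in the sample.
    aps = [
        average_precision_score(y_true[:, c], y_scores[:, c])
        for c in range(y_true.shape[1])
        if y_true[:, c].any()
    ]
    return float(np.mean(aps))

for version in ("v1_spring", "v2_summer"):
    y_true, y_scores = load_labeled_sample(version)
    print(f"{version}: mAP = {mean_average_precision(y_true, y_scores):.3f}")

Comparing these numbers across versions, and across time windows of the sampled traffic, is the manual equivalent of the scheduled metric comparison that Continuous Evaluation performs.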