
Answer-first summary for fast verification
Answer: B and E. Implement Vertex AI Model Monitoring to detect input feature skew with a 100% sample rate and a two-day monitoring interval, and pair it with a weekly BigQuery query for cost-effective assessment of the success metric. Together these give early detection of performance degradation while keeping costs low.
**Correct Answers: B and E**

Vertex AI Model Monitoring (Option B) provides near-real-time insight into serving data, enabling early detection of shifts in input distribution and model behavior. Monitoring input feature skew pinpoints when the prediction data diverges significantly from the training data, a leading signal of performance degradation. A 100% sample rate ensures every prediction is analyzed, while a two-day monitoring interval balances detection speed against computational cost.

Combining this with a weekly BigQuery query (Option E) adds a direct, inexpensive measurement of the success metric itself. The dual approach keeps the model effective and aligned with user needs while avoiding unnecessary computation.

The other options have limitations:

- **Dataflow job (Option A)**: Capable of computing the success metric, but a daily job lacks the early-warning value of skew monitoring and the frequent computations increase cost.
- **Cron job (Option C)**: Retraining every week regardless of performance is wasteful, since the model typically stays above the baseline for five weeks; the extra cost brings no guaranteed benefit.
- **BigQuery query alone (Option D)**: Cost-effective for measuring the metric, but it cannot promptly flag subtle shifts in input data distribution that precede a performance drop.
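To make the success-metric part of Option E concrete, the logic of the weekly check can be sketched in plain Python. This is an illustrative sketch, not the actual BigQuery query: the field names (`published_at`, `opened_at`, `seconds_on_page`) and the baseline value are hypothetical stand-ins for whatever schema and threshold the publisher actually uses.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable

@dataclass
class ReadEvent:
    # Hypothetical schema mirroring the hourly-updated BigQuery table.
    published_at: datetime
    opened_at: datetime
    seconds_on_page: float

def is_success(e: ReadEvent) -> bool:
    """A recommendation counts as a success if the article was opened
    within two days of publication and read for at least one minute."""
    return (e.opened_at - e.published_at) <= timedelta(days=2) and e.seconds_on_page >= 60

def success_rate(events: Iterable[ReadEvent]) -> float:
    """Fraction of recommendations that met the success criteria."""
    events = list(events)
    if not events:
        return 0.0
    return sum(is_success(e) for e in events) / len(events)

def needs_retraining(rate: float, baseline: float) -> bool:
    # Trigger the 12-hour retraining run only when the weekly
    # metric drops below the acceptable baseline.
    return rate < baseline
```

In production this aggregation would run as the scheduled weekly BigQuery query; the Python version just makes the metric definition and the retraining decision explicit.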
Author: LeetQuiz Editorial Team
You work for an online publisher that delivers news articles to over 50 million readers and have developed an AI model to recommend content for the weekly newsletter. The model's success is gauged by users opening a recommended article within two days of publication and spending at least one minute on the page. The data needed to compute this metric is updated hourly in BigQuery, and the model is trained on eight weeks of data. Its performance typically dips below the acceptable baseline after five weeks, and training takes 12 hours. To keep the model's performance above the baseline while minimizing cost, what is the best way to monitor the model and determine when retraining is needed? Consider the following options and choose those that best fit the scenario:
A. Configure a daily Dataflow job in Cloud Composer to compute the success metric, ensuring comprehensive data analysis but potentially increasing costs due to frequent computations.

B. Implement Vertex AI Model Monitoring to identify input feature skew with a 100% sample rate and a two-day monitoring interval, providing a balance between detection speed and computational efficiency.

C. Establish a weekly cron job in Cloud Tasks to retrain the model before the newsletter is compiled, which might lead to unnecessary retraining if the model's performance hasn't significantly declined.

D. Arrange for a weekly query in BigQuery to assess the success metric, which is cost-effective but may not promptly identify subtle changes in data distribution affecting model performance.

E. Combine Vertex AI Model Monitoring for early detection of performance degradation with a weekly BigQuery query for cost-effective success metric assessment, ensuring both proactive performance tracking and cost efficiency.