
As a professional working for an online publisher that delivers news articles to over 50 million readers, you have developed an AI model that recommends content for the weekly newsletter. The model's success is gauged by whether users open a recommended article within two days of publication and spend at least one minute on the page. The data needed to compute this metric is updated hourly in BigQuery, and the model is trained on eight weeks of data. Its performance typically dips below the acceptable baseline after five weeks, and training takes 12 hours. To keep the model's performance above the baseline while minimizing cost, what is the best way to monitor the model and determine when retraining is needed? Consider the following options and choose the one that best fits the scenario:
A
Schedule a daily Dataflow job in Cloud Composer to compute the success metric (sketched after the options), ensuring comprehensive data analysis but potentially increasing costs due to the frequent computations.
B
Implement Vertex AI Model Monitoring to detect input-feature skew with a 100% sample rate and a two-day monitoring interval (see the sketch after the options), balancing detection speed against computational efficiency.
C
Establish a weekly cron job in Cloud Tasks to retrain the model before the newsletter is compiled, which might lead to unnecessary retraining if the model's performance hasn't significantly declined.
D
Arrange for a weekly query in BigQuery to assess the success metric (sketched after the options), which is cost-effective but may not promptly identify subtle changes in data distribution that affect model performance.
E
Combine Vertex AI Model Monitoring for early detection of performance degradation with a weekly BigQuery query for cost-effective success metric assessment, ensuring both proactive performance tracking and cost efficiency.
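
For concreteness, here is roughly what option A would look like: a Cloud Composer (Airflow) DAG that launches a daily Dataflow job to recompute the success metric. This is only a sketch; the project ID, Dataflow template path, and table names are hypothetical, and the exact operator arguments depend on the installed Google provider package version.

```python
"""Hypothetical Cloud Composer DAG (option A): launch a daily Dataflow job
that recomputes the newsletter success metric. All IDs and paths are
illustrative placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowTemplatedJobStartOperator,
)

with DAG(
    dag_id="daily_success_metric",
    schedule_interval="0 2 * * *",  # once a day at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    compute_metric = DataflowTemplatedJobStartOperator(
        task_id="compute_success_metric",
        template="gs://example-bucket/templates/success_metric",  # hypothetical template
        project_id="example-project",
        location="us-central1",
        parameters={
            "events_table": "example-project.newsletter.engagement_events",
            "output_table": "example-project.newsletter.success_metric_daily",
        },
    )
```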
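Option B (and the monitoring half of option E) corresponds to creating a Vertex AI model-monitoring job on the prediction endpoint. The sketch below assumes the Vertex AI Python SDK's model_monitoring helpers; the endpoint ID, feature names, thresholds, target field, and alert address are placeholders, and parameter names may vary slightly across SDK versions.

```python
"""Rough sketch of option B: a Vertex AI model-monitoring job that samples
100% of prediction requests and checks input-feature skew every 48 hours.
Resource names, features, and thresholds are placeholders."""
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="example-project", location="us-central1")

# Compare serving feature distributions against the training data in BigQuery.
skew_config = model_monitoring.SkewDetectionConfig(
    data_source="bq://example-project.newsletter.training_data",
    skew_thresholds={"time_on_page": 0.3, "open_latency_hours": 0.3},
    target_field="engaged",  # hypothetical label column
)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="newsletter-recommender-skew-monitor",
    endpoint="projects/example-project/locations/us-central1/endpoints/1234567890",
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=1.0),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=48),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    objective_configs=model_monitoring.ObjectiveConfig(skew_detection_config=skew_config),
)
```

Note that skew monitoring watches the distribution of input features rather than the newsletter success metric itself.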
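Option D (and the BigQuery half of option E) amounts to recomputing the success metric once a week and comparing it against the baseline. A minimal sketch, assuming a hypothetical engagement_events table with opened_within_hours and seconds_on_page columns and an assumed baseline value:

```python
"""Sketch of option D: a weekly check (e.g. run by a scheduled job) that
recomputes the success metric in BigQuery and flags when it drops below the
baseline. Table name, columns, and baseline are hypothetical."""
from google.cloud import bigquery

BASELINE = 0.35  # assumed minimum acceptable success rate

SUCCESS_METRIC_SQL = """
SELECT
  COUNTIF(opened_within_hours <= 48 AND seconds_on_page >= 60) / COUNT(*) AS success_rate
FROM `example-project.newsletter.engagement_events`
WHERE publication_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
"""

def check_success_metric() -> None:
    client = bigquery.Client(project="example-project")
    row = next(iter(client.query(SUCCESS_METRIC_SQL).result()))
    print(f"Weekly success rate: {row.success_rate:.3f}")
    if row.success_rate < BASELINE:
        # In practice this would kick off retraining (e.g. a Vertex AI
        # Pipelines run); here we only report the breach.
        print("Success metric below baseline: schedule retraining.")

if __name__ == "__main__":
    check_success_metric()
```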