
You have successfully deployed to production a large, complex TensorFlow model trained on tabular data to predict the lifetime value (LTV) field for each subscription stored in the BigQuery table subscriptionPurchase in the project my-fortune500-company-project. All your training code, including preprocessing of the BigQuery data and deployment of the validated model to a Vertex AI endpoint, is organized into a TensorFlow Extended (TFX) pipeline. Drift occurs when the distributions of feature data (or of the model's predictions) in production change significantly over time, and you want to detect it before it degrades model quality. What should you do?
A. Implement continuous retraining of the model daily using Vertex AI Pipelines.
B. Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours.
C. Add a model monitoring job where 90% of incoming predictions are sampled every 24 hours.
D. Add a model monitoring job where 10% of incoming predictions are sampled every hour.
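The monitoring options trade off logging cost (sample rate) against detection latency (how often drift statistics are computed). A rough back-of-the-envelope sketch of that trade-off, where `PREDICTIONS_PER_HOUR` is a made-up traffic figure and not part of the question:

```python
# Hypothetical endpoint traffic; the real number depends on your workload.
PREDICTIONS_PER_HOUR = 10_000

def daily_sample_count(sample_rate: float) -> int:
    """Predictions logged per day under random sampling.

    Note: the monitoring interval controls how often drift statistics
    are evaluated, not how many requests get sampled.
    """
    return int(PREDICTIONS_PER_HOUR * 24 * sample_rate)

# (sample_rate, monitoring interval in hours) for options B, C, D.
options = {
    "B": (0.10, 24),  # 10% sampled, drift stats every 24 hours
    "C": (0.90, 24),  # 90% sampled, drift stats every 24 hours
    "D": (0.10, 1),   # 10% sampled, drift stats every hour
}

for name, (rate, interval_hours) in options.items():
    print(f"Option {name}: ~{daily_sample_count(rate):,} sampled "
          f"predictions/day, drift evaluated every {interval_hours} h")
```

At the same daily sampling volume, an hourly interval surfaces drift sooner than a daily one, whereas raising the sample rate from 10% to 90% multiplies logging cost ninefold without shortening the time to detection.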