You have deployed an ML model to a Vertex AI endpoint and set up a Vertex AI Model Monitoring job. To continuously evaluate the model by monitoring for feature attribution drift, what steps should you take?
A. Set up alerts using Cloud Logging, and use the Vertex AI console to review feature attributions.
B. Set up alerts using Cloud Logging, and use Looker Studio to create a dashboard that visualizes feature attribution drift. Review the dashboard periodically.
C. Enable request-response logging for the Vertex AI endpoint, and set up alerts using Pub/Sub. Create a Cloud Run function to run TensorFlow Data Validation on your dataset.
D. Enable request-response logging for the Vertex AI endpoint, and set up alerts using Cloud Logging. Review the feature attributions in the Google Cloud console when an alert is received.