
You have recently deployed a machine learning model to a Vertex AI endpoint. Because the incoming data drifts frequently, you have enabled request-response logging and created a Vertex AI Model Monitoring job to track the model's performance. However, the model is receiving higher-than-expected traffic, which is driving up monitoring costs. You want to reduce these costs while still being able to detect data drift quickly. How should you adjust your current setup to achieve this?
A
Replace the monitoring job with a Dataflow pipeline that uses TensorFlow Data Validation (TFDV)
B
Replace the monitoring job with a custom SQL script to calculate statistics on the features and predictions in BigQuery
C
Decrease the sample_rate parameter in the RandomSampleConfig of the monitoring job
D
Increase the monitor_interval parameter in the ScheduleConfig of the monitoring job
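
For reference, both parameters named in options C and D are fields on the monitoring job itself. Below is a minimal sketch, assuming the google-cloud-aiplatform Python SDK, of how such a job is created; the project, region, endpoint ID, email address, feature name, and threshold values are hypothetical placeholders, not taken from the question.

from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

# Placeholder project and region.
aiplatform.init(project="my-project", location="us-central1")

# Sample only a fraction of prediction requests for monitoring
# (option C tunes this rate down).
sampling_strategy = model_monitoring.RandomSampleConfig(sample_rate=0.2)

# Run the monitoring analysis every N hours
# (option D tunes this interval up).
schedule_config = model_monitoring.ScheduleConfig(monitor_interval=6)

# Flag drift when a feature's distribution distance exceeds its threshold.
drift_config = model_monitoring.DriftDetectionConfig(
    drift_thresholds={"feature_a": 0.05}  # placeholder feature and threshold
)
objective_config = model_monitoring.ObjectiveConfig(
    drift_detection_config=drift_config
)

alert_config = model_monitoring.EmailAlertConfig(
    user_emails=["ml-team@example.com"]  # placeholder address
)

# Placeholder endpoint resource name.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="drift-monitoring-job",
    endpoint=endpoint,
    logging_sampling_strategy=sampling_strategy,
    schedule_config=schedule_config,
    alert_config=alert_config,
    objective_configs=objective_config,
)

Lowering sample_rate reduces how many requests are logged and analyzed, while raising monitor_interval reduces how often the analysis runs; both cut cost, but they trade off differently against how quickly drift is detected.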