
You work as a machine learning engineer and have developed a custom model on Vertex AI to predict your application's user churn rate. To maintain model performance, you use Vertex AI Model Monitoring for skew detection. The training data, stored in BigQuery, contains two sets of features: demographic and behavioral. After some analysis, you discover that two separate models, each trained on one of the feature sets, outperform the original combined model. You therefore need to configure a new model monitoring pipeline that splits traffic between the two models. Both models must use the same prediction-sampling rate and monitoring frequency, and you want to minimize management effort. What should you do?
A. Keep the training dataset as is. Deploy the models to two separate endpoints, and submit two Vertex AI Model Monitoring jobs with appropriately selected feature-thresholds parameters.
B. Keep the training dataset as is. Deploy both models to the same endpoint, and submit a Vertex AI Model Monitoring job with a monitoring-config-from-file parameter that accounts for the model IDs and feature selections.
C. Separate the training dataset into two tables based on demographic and behavioral features. Deploy the models to two separate endpoints, and submit two Vertex AI Model Monitoring jobs.
D. Separate the training dataset into two tables based on demographic and behavioral features. Deploy both models to the same endpoint, and submit a Vertex AI Model Monitoring job with a monitoring-config-from-file parameter that accounts for the model IDs and training datasets.
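Options B and D hinge on the fact that a single monitoring job attached to one endpoint can carry per-deployed-model objective configurations (the monitoring-config-from-file flag of gcloud ai model-monitoring-jobs create accepts such a file), while the sampling rate and monitoring schedule are set once at the job level. The sketch below illustrates that idea with the Vertex AI Python SDK rather than the gcloud CLI; the project, endpoint, deployed-model IDs, BigQuery table, and feature names are placeholders, and the parameter names reflect my reading of the google-cloud-aiplatform model-monitoring helpers, so treat it as an illustrative sketch rather than the official solution.

```python
# Sketch: one monitoring job covering two models on a shared endpoint.
# All resource names, IDs, and thresholds below are placeholders.
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")

# One skew-detection config per model, each pointing at the shared BigQuery
# training table but watching only that model's feature subset.
demographic_skew = model_monitoring.SkewDetectionConfig(
    data_source="bq://my-project.churn.training_data",  # placeholder table
    target_field="churned",
    skew_thresholds={"age": 0.1, "country": 0.1},  # demographic features
)
behavioral_skew = model_monitoring.SkewDetectionConfig(
    data_source="bq://my-project.churn.training_data",
    target_field="churned",
    skew_thresholds={"sessions_per_week": 0.1, "avg_session_minutes": 0.1},
)

# Map each deployed-model ID on the shared endpoint to its own objective config.
objective_configs = {
    "1234567890": model_monitoring.ObjectiveConfig(skew_detection_config=demographic_skew),
    "0987654321": model_monitoring.ObjectiveConfig(skew_detection_config=behavioral_skew),
}

# Sampling rate and schedule are defined once at the job level, so both
# models are monitored with the same rate and frequency by construction.
job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="churn-monitoring",
    endpoint="projects/my-project/locations/us-central1/endpoints/1122334455",
    objective_configs=objective_configs,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
)
```

Because both objective configs live in one job, there is a single resource to manage, and the shared sampling strategy and schedule cannot drift apart, which is the property the question asks you to preserve.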