You have deployed multiple model versions from the Vertex AI Model Registry and want to conduct A/B testing to determine the best-performing model using the simplest approach. What should you do?
A
Split incoming traffic to distribute prediction requests among the versions. Monitor the performance of each version using Vertex AI's built-in monitoring tools.
B
Split incoming traffic among Google Kubernetes Engine (GKE) clusters, and use Traffic Director to distribute prediction requests to different versions. Monitor the performance of each version using Cloud Monitoring.
C
Split incoming traffic to distribute prediction requests among the versions. Monitor the performance of each version using Looker Studio dashboards that compare logged data for each version.
D
Split incoming traffic among separate Cloud Run instances of deployed models. Monitor the performance of each version using Cloud Monitoring.
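For reference, here is a minimal sketch of the traffic-splitting approach described in options A and C, using the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, region, model ID, version IDs, display names, and machine type are placeholders, not values taken from the question.

```python
# Sketch: A/B test two registered model versions behind one Vertex AI endpoint.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Create a single endpoint that will front both model versions.
endpoint = aiplatform.Endpoint.create(display_name="ab-test-endpoint")

# Reference two versions of the same Model Registry entry
# ("1234567890" and the "@1"/"@2" version IDs are placeholders).
model_v1 = aiplatform.Model(model_name="1234567890@1")
model_v2 = aiplatform.Model(model_name="1234567890@2")

# Deploy version 1 and give it all traffic initially.
model_v1.deploy(
    endpoint=endpoint,
    deployed_model_display_name="model-v1",
    machine_type="n1-standard-2",
    traffic_percentage=100,
)

# Deploy version 2 with a 50% share; Vertex AI scales down the
# traffic of the previously deployed version to accommodate it.
model_v2.deploy(
    endpoint=endpoint,
    deployed_model_display_name="model-v2",
    machine_type="n1-standard-2",
    traffic_percentage=50,
)

# Both versions now serve predictions behind one endpoint; request
# counts, latencies, and error rates per deployed model appear in the
# endpoint's built-in monitoring charts in the Vertex AI console.
print(endpoint.traffic_split)
```

Because the split is handled by the endpoint itself and the metrics are collected automatically, this setup needs no extra infrastructure such as GKE clusters, Traffic Director, separate Cloud Run services, or custom dashboards.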