
Your team is working on a project that involves training a large number of machine learning (ML) models using various algorithms, parameters, and datasets. The models are being trained using two services: Vertex AI Pipelines and Vertex AI Workbench notebook instances. The goal is to compare the performance of all the models across these two services efficiently. How should you store the parameters and metrics with minimal effort to facilitate this comparison?
A. Implement an additional step for all the models running in pipelines and notebooks to export parameters and metrics to BigQuery.
B. Create a Vertex AI experiment. Submit all the pipelines as experiment runs. For models trained on notebooks, log parameters and metrics by using the Vertex AI SDK.
C. Implement all models in Vertex AI Pipelines. Create a Vertex AI experiment, and associate all pipeline runs with that experiment.
D. Store all model parameters and metrics as model metadata by using the Vertex AI Metadata API.
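For context, option B corresponds to the Vertex AI Experiments workflow: pipeline runs are submitted under a shared experiment, and notebook-trained models log their parameters and metrics through the Vertex AI SDK so everything can be compared in one place. The following is a minimal sketch, not a definitive implementation; the project ID, experiment name, run name, and pipeline spec path are hypothetical placeholders, and it assumes the google-cloud-aiplatform package.

# Minimal sketch of option B (hypothetical names throughout).
from google.cloud import aiplatform

# Point the SDK at a shared experiment so runs from pipelines and notebooks
# end up in the same experiment for side-by-side comparison.
aiplatform.init(
    project="my-project",           # hypothetical project ID
    location="us-central1",
    experiment="model-comparison",  # shared Vertex AI experiment
)

# Notebook-trained model: log parameters and metrics with the SDK.
aiplatform.start_run(run="notebook-xgboost-run-1")  # hypothetical run name
aiplatform.log_params({"learning_rate": 0.1, "max_depth": 6})
aiplatform.log_metrics({"rmse": 0.42, "mae": 0.31})
aiplatform.end_run()

# Pipeline-trained models: submit each pipeline run under the same experiment.
pipeline_job = aiplatform.PipelineJob(
    display_name="training-pipeline",
    template_path="pipeline.json",               # hypothetical compiled pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root", # hypothetical staging bucket
)
pipeline_job.submit(experiment="model-comparison")

With this setup, runs from both services appear together in the Vertex AI Experiments console (or via aiplatform.get_experiment_df()), which is what makes the comparison possible with minimal additional effort.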