
Your team is developing smaller, distilled LLMs for a specific domain. After performing batch inference on a dataset using several variations of your distilled LLMs and storing the outputs in Cloud Storage, you need to create an evaluation workflow. This workflow must integrate with your existing Vertex AI pipeline to assess the performance of the different LLM versions and track the resulting artifacts. What should you do?
A
Develop a custom Python component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
B
Use a Dataflow component that processes the batch inference outputs from Cloud Storage, calculates evaluation metrics in a distributed manner, and writes the results to a BigQuery table.
C
Create a custom Vertex AI Pipelines component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
D
Use the Automatic side-by-side (AutoSxS) pipeline component that processes the batch inference outputs from Cloud Storage, aggregates evaluation metrics, and writes the results to a BigQuery table.
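The metric-calculation step that several of these options describe can be sketched in plain Python. This is a minimal illustration, not a full pipeline component: it assumes the batch-inference outputs are JSONL records with hypothetical `prediction` and `reference` fields, and it computes a simple exact-match accuracy. In a real Vertex AI pipeline this logic would sit inside a component (e.g. a function decorated with KFP's `@dsl.component`) that reads the JSONL files from Cloud Storage and writes the resulting metrics to a BigQuery table.

```python
import json


def exact_match_accuracy(jsonl_lines):
    """Return the fraction of records whose prediction exactly
    matches the reference answer (whitespace-trimmed).

    jsonl_lines: iterable of JSON strings, one record per line,
    each with hypothetical "prediction" and "reference" keys.
    """
    records = [json.loads(line) for line in jsonl_lines]
    if not records:
        return 0.0
    hits = sum(
        1
        for r in records
        if r["prediction"].strip() == r["reference"].strip()
    )
    return hits / len(records)
```

Running this over each model variant's output file yields one comparable score per distilled LLM, which the component could then load into BigQuery alongside the model version for artifact tracking.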