
Answer-first summary for fast verification
Answer: Use the Automatic side-by-side (AutoSxS) pipeline component that processes the batch inference outputs from Cloud Storage, aggregates evaluation metrics, and writes the results to a BigQuery table. (Also acceptable: Create a custom Vertex AI Pipelines component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.)
The question requires an evaluation workflow that integrates with an existing Vertex AI pipeline to assess multiple LLM versions while tracking artifacts. Option D (AutoSxS) is optimal because it is purpose-built for comparing model versions in Vertex AI Pipelines, automatically aggregates evaluation metrics, and provides built-in artifact tracking with minimal development overhead. Option C (a custom Vertex AI Pipelines component) is also correct, since it integrates with the pipeline and supports artifact tracking, but it requires more development effort. Options A and B are less suitable: A (a standalone custom Python component) lacks native Vertex AI integration and artifact tracking, while B (Dataflow) is over-engineered for this task and does not integrate seamlessly with Vertex AI Pipelines for artifact lineage.
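To make the trade-off concrete, below is a minimal sketch of the metric-calculation logic that a custom component (option C) would have to implement by hand, and that AutoSxS (option D) handles for you. All names here are illustrative assumptions; the KFP component decorator, Cloud Storage reads, and BigQuery writes are deliberately omitted.

```python
# Illustrative sketch of the metric-aggregation step a custom
# Vertex AI Pipelines component (option C) might wrap. In a real
# pipeline this logic would run inside a component that reads
# batch-inference JSONL files from Cloud Storage and appends one
# metrics row per distilled-LLM variant to a BigQuery table.

def evaluate_outputs(records):
    """Compute simple evaluation metrics over batch inference records.

    Each record is a dict with 'prediction' and 'reference' strings,
    mirroring one line of a batch-inference output file.
    """
    total = len(records)
    exact = sum(
        1 for r in records
        if r["prediction"].strip() == r["reference"].strip()
    )
    avg_tokens = (
        sum(len(r["prediction"].split()) for r in records) / total
        if total else 0.0
    )
    return {
        "num_examples": total,
        "exact_match": exact / total if total else 0.0,
        "avg_prediction_tokens": avg_tokens,
    }

sample = [
    {"prediction": "Paris", "reference": "Paris"},
    {"prediction": "Lyon", "reference": "Paris"},
]
print(evaluate_outputs(sample))
```

Every metric choice, aggregation rule, and artifact-lineage hook in a sketch like this is custom work; AutoSxS supplies an autorater-based comparison and metric aggregation out of the box, which is why D carries the lowest development overhead.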
Author: LeetQuiz Editorial Team
Your team is developing smaller, distilled LLMs for a specific domain. After performing batch inference on a dataset using several variations of your distilled LLMs and storing the outputs in Cloud Storage, you need to create an evaluation workflow. This workflow must integrate with your existing Vertex AI pipeline to assess the performance of the different LLM versions and track the resulting artifacts. What should you do?
A. Develop a custom Python component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
B. Use a Dataflow component that processes the batch inference outputs from Cloud Storage, calculates evaluation metrics in a distributed manner, and writes the results to a BigQuery table.
C. Create a custom Vertex AI Pipelines component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
D. Use the Automatic side-by-side (AutoSxS) pipeline component that processes the batch inference outputs from Cloud Storage, aggregates evaluation metrics, and writes the results to a BigQuery table.