
Answer-first summary for fast verification
Answer: Use the KubeFlow Pipelines SDK to implement the pipeline. Use the BigQueryJobOp component to run the preprocessing script and the CustomTrainingJobOp component to launch a Vertex AI training job.
The correct answer is A. This approach uses the KubeFlow Pipelines SDK to orchestrate the steps while reusing the existing SQL preprocessing script, which minimizes development and training time. The BigQueryJobOp component runs the SQL script directly in BigQuery without introducing new preprocessing infrastructure, and the CustomTrainingJobOp component launches the TensorFlow training job on Vertex AI.
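The pipeline structure can be sketched as follows. A real implementation would use the KubeFlow Pipelines SDK with Google Cloud Pipeline Components (the BigQuery and custom-training operators) compiled and scheduled to run weekly; the plain-Python stubs below are hypothetical stand-ins that only illustrate the two-step dependency, with preprocessing completing before training starts.

```python
# Illustrative sketch only: plain-Python stubs standing in for the KFP
# components named in the answer. In a real pipeline these steps would be
# BigQueryJobOp and CustomTrainingJobOp, wired together with the KubeFlow
# Pipelines SDK and triggered on a weekly schedule.

PREPROCESS_SQL = "SELECT * FROM raw_events"  # hypothetical placeholder query


def run_bigquery_preprocessing(sql: str) -> str:
    """Stand-in for BigQueryJobOp: run the SQL preprocessing script."""
    # A real component would submit `sql` as a BigQuery job and return the
    # destination table; here we just return a placeholder table name.
    return "project.dataset.training_data"


def launch_vertex_training(training_table: str) -> str:
    """Stand-in for CustomTrainingJobOp: launch a Vertex AI training job."""
    # A real component would start a custom training job reading from
    # `training_table`; here we return a placeholder model identifier.
    return f"model_trained_on:{training_table}"


def weekly_training_pipeline() -> str:
    """Orchestrate the two steps: preprocessing first, then training."""
    table = run_bigquery_preprocessing(PREPROCESS_SQL)
    return launch_vertex_training(table)


print(weekly_training_pipeline())
```

The key design point the sketch captures is the ordering dependency: the training step consumes the table produced by the preprocessing step, which is exactly how the KFP components would be chained by passing outputs as inputs.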
Author: LeetQuiz Editorial Team
You have recently developed a wide and deep model using TensorFlow for generating daily recommendations. To prepare your training datasets, you employed a SQL script to preprocess raw data from BigQuery by performing instance-level transformations. You now need to create a robust training pipeline that will automatically retrain the model on a weekly basis. Considering the need to minimize model development and training time, what approach should you take to develop the training pipeline?
A
Use the KubeFlow Pipelines SDK to implement the pipeline. Use the BigQueryJobOp component to run the preprocessing script and the CustomTrainingJobOp component to launch a Vertex AI training job.
B
Use the KubeFlow Pipelines SDK to implement the pipeline. Use the DataFlowPythonJobOp component to preprocess the data and the CustomTrainingJobOp component to launch a Vertex AI training job.
C
Use the TensorFlow Extended SDK to implement the pipeline. Use the ExampleGen component with the BigQuery executor to ingest the data, the Transform component to preprocess the data, and the Trainer component to launch a Vertex AI training job.
D
Use the TensorFlow Extended SDK to implement the pipeline. Implement the preprocessing steps as part of the input_fn of the model. Use the ExampleGen component with the BigQuery executor to ingest the data and the Trainer component to launch a Vertex AI training job.