
You have recently developed a wide and deep model using TensorFlow for generating daily recommendations. To prepare your training datasets, you employed a SQL script to preprocess raw data from BigQuery by performing instance-level transformations. You now need to create a robust training pipeline that will automatically retrain the model on a weekly basis. Considering the need to minimize model development and training time, what approach should you take to develop the training pipeline?
A
Use the KubeFlow Pipelines SDK to implement the pipeline. Use the BigQueryJobOp component to run the preprocessing script and the CustomTrainingJobOp component to launch a Vertex AI training job.
B
Use the KubeFlow Pipelines SDK to implement the pipeline. Use the DataFlowPythonJobOp component to preprocess the data and the CustomTrainingJobOp component to launch a Vertex AI training job.
C
Use the TensorFlow Extended SDK to implement the pipeline. Use the ExampleGen component with the BigQuery executor to ingest the data, the Transform component to preprocess the data, and the Trainer component to launch a Vertex AI training job.
D
Use the TensorFlow Extended SDK to implement the pipeline. Implement the preprocessing steps as part of the input_fn of the model. Use the ExampleGen component with the BigQuery executor to ingest the data and the Trainer component to launch a Vertex AI training job.
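To make the structure of option A concrete, here is a framework-free sketch of the dependency it describes: a weekly run executes the existing BigQuery SQL preprocessing first, then launches training on its output. In a real deployment these steps would be the Kubeflow Pipelines SDK's BigQueryJobOp and CustomTrainingJobOp components; the function names and table strings below are hypothetical stand-ins used only to illustrate the ordering.

```python
def run_bigquery_preprocessing(sql: str) -> str:
    """Stand-in for BigQueryJobOp: run the existing SQL script and
    return the name of the preprocessed output table (hypothetical)."""
    return "project.dataset.weekly_training_data"


def run_vertex_training(training_table: str) -> str:
    """Stand-in for CustomTrainingJobOp: launch a Vertex AI training
    job that reads the preprocessed table (hypothetical)."""
    return f"model_trained_on({training_table})"


def weekly_pipeline(sql: str) -> list[str]:
    """One scheduled run: preprocessing must finish before training,
    mirroring how the training component consumes the output of the
    BigQuery component in the pipeline DAG."""
    completed = []
    table = run_bigquery_preprocessing(sql)
    completed.append("preprocess")
    run_vertex_training(table)
    completed.append("train")
    return completed
```

This approach reuses the SQL script as-is and needs no new Transform or input_fn code, which is why it minimizes development time relative to options C and D.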