
You are developing a custom TensorFlow model intended for online predictions. The training data resides in BigQuery, and you need to apply instance-level data transformations to this data for both model training and serving. Consistency is key, so you want to use the same preprocessing routine during both phases. How should you configure the preprocessing routine to achieve this consistency?
A. Create a BigQuery script to preprocess the data, and write the result to another BigQuery table.
B. Create a pipeline in Vertex AI Pipelines to read the data from BigQuery and preprocess it using a custom preprocessing component.
C. Create a preprocessing function that reads and transforms the data from BigQuery. Create a Vertex AI custom prediction routine that calls the preprocessing function at serving time.
D. Create an Apache Beam pipeline to read the data from BigQuery and preprocess it by using TensorFlow Transform and Dataflow.
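
For context on option C, training/serving consistency comes from sharing one plain Python function between the training code and the serving container. Below is a minimal sketch assuming the Vertex AI SDK's Custom Prediction Routine `Predictor` interface; the `transform_instance` function, the feature names, and the model invocation are illustrative assumptions, not part of the question.

```python
# A minimal sketch of option C's pattern, assuming the google-cloud-aiplatform
# Custom Prediction Routine (CPR) Predictor interface. transform_instance and
# the feature names below are hypothetical placeholders.
import tensorflow as tf
from google.cloud.aiplatform.prediction.predictor import Predictor


def transform_instance(instance: dict) -> dict:
    """Instance-level transformation shared by training and serving.

    Hypothetical example: rescale a numeric feature and normalize a string
    feature. The training job imports this same function when it reads rows
    from BigQuery, so both phases apply identical logic.
    """
    return {
        "amount": float(instance["amount"]) / 100.0,
        "category": str(instance["category"]).lower(),
    }


class MyPredictor(Predictor):
    """Custom prediction routine that applies the shared transform at serving time."""

    def load(self, artifacts_uri: str) -> None:
        # Load the trained TensorFlow SavedModel exported by the training job.
        self._model = tf.saved_model.load(artifacts_uri)

    def preprocess(self, prediction_input: dict) -> list:
        # Apply the same instance-level transform used during training.
        return [transform_instance(i) for i in prediction_input["instances"]]

    def predict(self, instances: list) -> list:
        # Invoke the model; the exact call depends on how the SavedModel was
        # exported (a callable export is assumed here for illustration).
        return [self._model(i) for i in instances]

    def postprocess(self, prediction_results: list) -> dict:
        return {"predictions": prediction_results}
```

Because `transform_instance` lives in one module imported by both the training pipeline and the predictor, there is no second implementation to drift out of sync, which is the consistency property the question asks for.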