
As a Machine Learning Engineer, you must run batch prediction on 100 million records stored in a BigQuery table, using a custom TensorFlow DNN regressor model, and write the predicted results back to a BigQuery table. Given the size of the data, you need to design an efficient inference pipeline that minimizes implementation effort. What approach should you take?
A. Import the TensorFlow model into BigQuery ML, and run the ML.PREDICT function.
B. Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery.
C. Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run batch inference on Vertex AI Prediction, and write the results to BigQuery.
D. Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and write the results to BigQuery.
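For context on what the BigQuery ML approach in option A would involve, here is a minimal sketch using the google.cloud.bigquery Python client. The project, dataset, table names, and Cloud Storage path are placeholders for illustration only, not values given in the question.

```python
from google.cloud import bigquery

# Placeholder project; authentication is assumed to be configured.
client = bigquery.Client(project="my-project")

# Step 1: import the exported TensorFlow SavedModel into BigQuery ML.
import_sql = """
CREATE OR REPLACE MODEL `my-project.my_dataset.dnn_regressor`
OPTIONS (MODEL_TYPE = 'TENSORFLOW',
         MODEL_PATH = 'gs://my-bucket/saved_model/*')
"""
client.query(import_sql).result()  # block until the import job completes

# Step 2: run batch prediction with ML.PREDICT and materialize the results
# as a new BigQuery table, keeping all computation inside BigQuery.
predict_sql = """
CREATE OR REPLACE TABLE `my-project.my_dataset.predictions` AS
SELECT *
FROM ML.PREDICT(
  MODEL `my-project.my_dataset.dnn_regressor`,
  (SELECT * FROM `my-project.my_dataset.input_records`))
"""
client.query(predict_sql).result()
```

Because both the import and the prediction run entirely inside BigQuery, no separate data movement or serving infrastructure is needed, which is the sense in which this option minimizes implementation effort for a table of this size.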