
Answer-first summary for fast verification
Answer: Import the TensorFlow model with BigQuery ML, and run the ml.predict function.
Option A is correct. BigQuery ML can import a TensorFlow SavedModel directly from Cloud Storage (CREATE MODEL with model_type = 'TENSORFLOW'), after which ml.predict runs batch inference as an ordinary SQL query inside BigQuery. Because the 100 million records never leave BigQuery, the prediction scales with the service's own execution engine and the results can be written back with a single CREATE TABLE ... AS SELECT. Options B, C, and D all require building and operating an external pipeline (Dataflow jobs, TFRecord conversion, or custom inference code), which adds implementation effort without improving the outcome.
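The approach in Option A can be sketched in two BigQuery ML statements. The dataset, model, table, and Cloud Storage path names below are placeholders, not values from the question:

```sql
-- Import the TensorFlow SavedModel from Cloud Storage into BigQuery ML.
-- (Assumes the DNN regressor was exported as a SavedModel to the bucket.)
CREATE OR REPLACE MODEL `my_dataset.dnn_regressor`
  OPTIONS (
    model_type = 'TENSORFLOW',
    model_path = 'gs://my-bucket/dnn_regressor/saved_model/*'
  );

-- Run batch prediction over the source table and persist the results
-- back into BigQuery in a single query.
CREATE OR REPLACE TABLE `my_dataset.predictions` AS
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.dnn_regressor`,
  TABLE `my_dataset.source_records`
);
```

ML.PREDICT adds the model's output columns alongside the input columns, so no custom glue code is needed to join predictions back to the source rows.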
Author: LeetQuiz Editorial Team
As a Machine Learning Engineer, you are tasked with executing a batch prediction on 100 million records stored in a BigQuery table. The goal is to use a custom TensorFlow DNN regressor model for prediction and subsequently store the predicted results back into a BigQuery table. Given the enormous size of the data, you need to design an efficient inference pipeline that minimizes the effort required for implementation. What approach should you take?
A
Import the TensorFlow model with BigQuery ML, and run the ml.predict function.
B
Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery.
C
Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run a batch inference on Vertex AI Prediction, and write the results to BigQuery.
D
Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and write the results to BigQuery.