As a Machine Learning Engineer, you are tasked with running batch prediction on 100 million records stored in a BigQuery table. The goal is to use a custom TensorFlow DNN regressor model for inference and write the predicted results back into a BigQuery table. Given the size of the data, you need to design an efficient inference pipeline that minimizes implementation effort. What approach should you take?
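One common approach this scenario points toward is a Vertex AI batch prediction job, which can read input directly from BigQuery and write predictions back to BigQuery without any custom data-movement code. The sketch below illustrates submitting such a job with the `google-cloud-aiplatform` SDK; the project ID, region, dataset, table, and model resource name are all illustrative placeholders, not values from the question.

```python
# Hypothetical sketch: submit a Vertex AI batch prediction job that reads
# records from a BigQuery table and writes predictions to a BigQuery dataset.
# All resource names below (project, dataset, model ID) are placeholders.

def bq_uri(project: str, dataset: str, table: str = "") -> str:
    """Build a BigQuery URI in the bq://project.dataset[.table] form
    expected by Vertex AI batch prediction."""
    return f"bq://{project}.{dataset}" + (f".{table}" if table else "")

def run_batch_prediction() -> None:
    # Imported here so the helper above stays importable without the SDK.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Reference an already-uploaded custom TensorFlow model.
    model = aiplatform.Model(
        "projects/my-project/locations/us-central1/models/1234567890"
    )

    # BigQuery in, BigQuery out: Vertex AI shards the 100M rows across
    # replicas and writes results to a new table under the destination.
    job = model.batch_predict(
        job_display_name="dnn-regressor-batch",
        bigquery_source=bq_uri("my-project", "mydataset", "input_records"),
        bigquery_destination_prefix=bq_uri("my-project", "mydataset"),
        machine_type="n1-standard-4",
        starting_replica_count=1,
        max_replica_count=10,
    )
    job.wait()  # block until the batch job completes

if __name__ == "__main__":
    run_batch_prediction()
```

Because the service handles sharding, scaling, and writing results, this avoids hand-rolled Dataflow or export/import pipelines, which matches the question's emphasis on minimal implementation effort.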