
Answer-first summary for fast verification
Answer: (D) Export the TensorFlow model to BigQuery ML and use SQL queries with ML.PREDICT to generate predictions directly within BigQuery; (A) Deploy the model on AI Platform and use it to serve predictions via an API, managing the batch process through custom scripts.
**Correct Options**

**D.** Export the TensorFlow model to BigQuery ML and use SQL queries with ML.PREDICT to generate predictions directly within BigQuery.
This approach is the most efficient for batch predictions on text data stored in BigQuery: it leverages BigQuery's built-in capabilities for handling large datasets and performs predictions with minimal computational overhead. It eliminates the need for additional services or infrastructure, aligning with the goals of cost efficiency and scalability.

**A.** Deploy the model on AI Platform and use it to serve predictions via an API, managing the batch process through custom scripts.
This option provides flexibility and can still produce the required predictions at scale, which is why it qualifies as the second correct choice. However, it introduces complexity and overhead for batch processing: managing API requests and custom scripts is less efficient and more costly than using BigQuery ML directly.

**Incorrect Options**

**B.** Utilize Google Cloud Dataflow to process the data from BigQuery, applying the TensorFlow SavedModel for predictions within the Dataflow pipeline.
This method is overly complex for the task at hand, introducing additional overhead and cost without significant benefits over the more straightforward BigQuery ML approach. Dataflow is better suited to scenarios requiring complex data transformations or stream processing.

**C.** Submit a batch prediction job directly on AI Platform, specifying the model's location in Cloud Storage and the BigQuery table as input.
Although feasible, this method involves more steps and potential latency than using BigQuery ML directly. It requires managing input and output locations in Cloud Storage, making it less efficient for batch predictions on large datasets.

**E.** Implement a combination of deploying the model on AI Platform for real-time predictions and using Dataflow for batch processing, to cover all use cases.
This option is not the most efficient for the given scenario: it combines two approaches that each introduce unnecessary complexity and overhead for batch predictions. A single, more efficient method that meets the specific requirements of the task is preferable.
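As a sketch of the recommended approach (option D), BigQuery ML can register an exported TensorFlow SavedModel with a `CREATE MODEL` statement and then run batch predictions entirely in SQL with `ML.PREDICT`. The snippet below only builds the SQL strings so it can run anywhere; the dataset, table, bucket, and column names are placeholders, and in practice you would submit the statements to BigQuery (e.g., with the `bq` CLI or a BigQuery client library).

```python
# Illustrative only: dataset/table/bucket/column names below are placeholders,
# not values from the question.

def import_model_sql(model_name: str, gcs_path: str) -> str:
    """SQL that registers a TensorFlow SavedModel as a BigQuery ML model."""
    return (
        f"CREATE OR REPLACE MODEL `{model_name}` "
        f"OPTIONS (model_type='TENSORFLOW', model_path='{gcs_path}')"
    )

def batch_predict_sql(model_name: str, source_table: str, text_column: str) -> str:
    """SQL that runs batch predictions over the whole table with ML.PREDICT."""
    return (
        f"SELECT * FROM ML.PREDICT(MODEL `{model_name}`, "
        f"(SELECT {text_column} FROM `{source_table}`))"
    )

if __name__ == "__main__":
    # One-time model import, then a single set-based prediction query:
    print(import_model_sql("my_dataset.text_clf", "gs://my-bucket/saved_model/*"))
    print(batch_predict_sql("my_dataset.text_clf", "my_dataset.reviews", "review_text"))
```

Because the prediction is a single SQL query, BigQuery handles scaling across the dataset itself; no serving endpoint, custom scripts, or separate pipeline is needed, which is exactly why option D minimizes overhead and cost.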
Author: LeetQuiz Editorial Team
You have developed a text classification model using TensorFlow on Google Cloud's AI Platform. Your goal is to perform batch predictions on a large dataset of text stored in BigQuery, with the constraints of minimizing computational overhead and ensuring cost efficiency. Additionally, you need to ensure that the solution is scalable and can handle the dataset's size without performance degradation. Which of the following approaches should you choose? (Choose two options.)
A. Deploy the model on AI Platform and use it to serve predictions via an API, managing the batch process through custom scripts.
B. Utilize Google Cloud Dataflow to process the data from BigQuery, applying the TensorFlow SavedModel for predictions within the Dataflow pipeline.
C. Submit a batch prediction job directly on AI Platform, specifying the model's location in Cloud Storage and the BigQuery table as input.
D. Export the TensorFlow model to BigQuery ML and use SQL queries with ML.PREDICT to generate predictions directly within BigQuery.
E. Implement a combination of deploying the model on AI Platform for real-time predictions and using Dataflow for batch processing, to cover all use cases.