
You have developed a text classification model using TensorFlow on Google Cloud's AI Platform. Your goal is to perform batch predictions on a large dataset of text stored in BigQuery while minimizing computational overhead and keeping costs low. The solution must also scale to the size of the dataset without performance degradation. Which of the following approaches should you choose? (Choose two options if E is available.)
A. Deploy the model on AI Platform and serve predictions through an API, managing the batch process with custom scripts.
B. Use Google Cloud Dataflow to read the data from BigQuery and apply the TensorFlow SavedModel for predictions within the Dataflow pipeline (see the Dataflow sketch after the options).
C. Submit a batch prediction job directly on AI Platform, specifying the model's location in Cloud Storage and the BigQuery table as input (see the batch prediction sketch after the options).
D. Export the TensorFlow model to BigQuery ML and use SQL queries with ML.PREDICT to generate predictions directly within BigQuery (see the ML.PREDICT sketch after the options).
E. Implement a combination of deploying the model on AI Platform for real-time predictions and using Dataflow for batch processing, to cover all use cases.
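For option B, the sketch below shows one way a Dataflow pipeline could apply the model: an Apache Beam (Python) job reads rows from BigQuery, loads the exported SavedModel once per worker, and writes scores back to BigQuery. The project, bucket, table, column names, and model path are placeholder assumptions, not values given in the question.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
import tensorflow as tf


class PredictDoFn(beam.DoFn):
    """Loads the SavedModel once per worker and scores each BigQuery row."""

    def __init__(self, model_dir):
        self._model_dir = model_dir
        self._model = None

    def setup(self):
        # Load the exported TensorFlow SavedModel from Cloud Storage.
        self._model = tf.saved_model.load(self._model_dir)

    def process(self, row):
        # 'id' and 'text' are assumed column names; the output shape
        # assumes a single-score classifier head.
        prediction = self._model(tf.constant([row["text"]]))
        yield {"id": row["id"], "score": float(prediction.numpy()[0][0])}


def run():
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",               # placeholder project
        region="us-central1",
        temp_location="gs://my-bucket/tmp")  # placeholder bucket
    with beam.Pipeline(options=options) as p:
        (p
         | "ReadFromBQ" >> beam.io.ReadFromBigQuery(
             query="SELECT id, text FROM `my-project.my_dataset.documents`",
             use_standard_sql=True)
         | "Predict" >> beam.ParDo(
             PredictDoFn("gs://my-bucket/models/text_classifier"))
         | "WriteToBQ" >> beam.io.WriteToBigQuery(
             "my-project:my_dataset.predictions",
             schema="id:STRING,score:FLOAT",
             write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))


if __name__ == "__main__":
    run()
```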
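For option C, a hedged sketch of submitting an AI Platform batch prediction job through the Training and Prediction API (google-api-python-client). All project, bucket, job, and runtime-version values are placeholders. Note that predictionInput in this API takes Cloud Storage inputPaths (for example, newline-delimited JSON files), so how the BigQuery table would be supplied as input is something to weigh when evaluating this option.

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"          # placeholder project ID
JOB_ID = "text_batch_predict_001"  # placeholder job name

# Build a client for the AI Platform Training and Prediction API.
ml = discovery.build("ml", "v1")

body = {
    "jobId": JOB_ID,
    "predictionInput": {
        # SavedModel directory in Cloud Storage (placeholder path).
        "uri": "gs://my-bucket/models/text_classifier/",
        "dataFormat": "JSON",
        # Batch prediction reads newline-delimited JSON from Cloud Storage,
        # e.g. data previously exported from BigQuery.
        "inputPaths": ["gs://my-bucket/batch_input/*.json"],
        "outputPath": "gs://my-bucket/batch_output/",
        "region": "us-central1",
        "runtimeVersion": "2.11",  # placeholder runtime version
    },
}

request = ml.projects().jobs().create(
    parent=f"projects/{PROJECT_ID}", body=body)
response = request.execute()
print(response)
```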
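For option D, a minimal sketch of importing the TensorFlow SavedModel into BigQuery ML and scoring with ML.PREDICT, issued here through the google-cloud-bigquery Python client. The dataset, table, model path, and input column name are assumptions; the column passed to ML.PREDICT must match the SavedModel's serving input name.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Register the exported SavedModel with BigQuery ML (one-time step).
create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.text_classifier`
OPTIONS (MODEL_TYPE = 'TENSORFLOW',
         MODEL_PATH = 'gs://my-bucket/models/text_classifier/*')
"""
client.query(create_model_sql).result()

# Run batch predictions entirely inside BigQuery with ML.PREDICT.
predict_sql = """
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.text_classifier`,
  (SELECT text AS input FROM `my-project.my_dataset.documents`))
"""
for row in client.query(predict_sql).result():
    print(dict(row))
```

Because the prediction runs where the data already lives, this approach avoids moving the dataset out of BigQuery.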