
Answer-first summary for fast verification
Answer: Import the model into BigQuery ML. Make predictions using batch reading data from BigQuery, and push the data to Cloud SQL
The correct answer is A: Import the model into BigQuery ML, make batch predictions by reading data directly from BigQuery, and push the results to Cloud SQL. Because the gaming data already lives in BigQuery, importing the TensorFlow model into BigQuery ML lets you score players with a single SQL query, avoiding a separate model-deployment and serving stack. Batch prediction also fits the use case: the model predicts purchases over a two-week horizon, so the in-game experience can be adapted periodically rather than in real time. Pushing the results to Cloud SQL then makes them available to the game backend for fast lookups. This approach optimizes cost by reusing existing infrastructure, meets the user-experience goal with periodic batch scoring, and simplifies management by keeping training data and inference together in BigQuery.
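As a minimal sketch of the workflow behind answer A, the two SQL statements below import a TensorFlow SavedModel (exported to Cloud Storage) into BigQuery ML and then batch-score a table with ML.PREDICT. The statements are built as Python strings so the shapes are easy to inspect; all resource names (`gaming.purchase_model`, `gs://my-bucket/model/*`, `gaming.player_stats`) are hypothetical placeholders, and the final push to Cloud SQL (e.g., an export job) is omitted.

```python
def import_model_sql(model_name: str, gcs_path: str) -> str:
    """CREATE MODEL statement that imports a TensorFlow SavedModel
    from Cloud Storage into BigQuery ML."""
    return (
        f"CREATE OR REPLACE MODEL `{model_name}`\n"
        f"OPTIONS (model_type='TENSORFLOW',\n"
        f"         model_path='{gcs_path}')"
    )

def batch_predict_sql(model_name: str, source_table: str) -> str:
    """Batch-score every row of the source table with ML.PREDICT;
    the result set would then be exported to Cloud SQL."""
    return (
        f"SELECT *\n"
        f"FROM ML.PREDICT(MODEL `{model_name}`,\n"
        f"                (SELECT * FROM `{source_table}`))"
    )

if __name__ == "__main__":
    # Hypothetical dataset, bucket, and table names for illustration.
    print(import_model_sql("gaming.purchase_model", "gs://my-bucket/model/*"))
    print(batch_predict_sql("gaming.purchase_model", "gaming.player_stats"))
```

In practice these statements would be run via the BigQuery console, `bq query`, or a client library on a schedule (e.g., every night), which is what makes the approach cheap and low-maintenance compared with a continuously running prediction service.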
Author: LeetQuiz Editorial Team
You work for a gaming company that develops massively multiplayer online (MMO) games. You have built a TensorFlow model to predict whether players will make in-app purchases of more than $10 within the next two weeks. The main objective is to use these predictions to adapt and enhance each user’s in-game experience effectively. The gaming user data, which includes gameplay statistics and purchasing behavior, is stored in BigQuery. Given the requirement to optimize for cost, user experience, and ease of management, how should you serve your model predictions to best meet these criteria?
A
Import the model into BigQuery ML. Make predictions using batch reading data from BigQuery, and push the data to Cloud SQL.
B
Deploy the model to Vertex AI Prediction. Make predictions using batch reading data from Cloud Bigtable, and push the data to Cloud SQL.
C
Embed the model in the mobile application. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL.
D
Embed the model in the streaming Dataflow pipeline. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL.