
Answer-first summary for fast verification
Answer: Import the model into Vertex AI Model Registry. Create a Vertex AI endpoint that hosts the model, and make online inference requests.
Option D is the most suitable solution because it uses Vertex AI's managed serving stack for low-latency online inference. A Vertex AI endpoint is designed for real-time predictions: it hosts the model behind a stable URL, scales automatically with traffic, and handles resource allocation for you. The game backend can send each completed session's data to the endpoint and receive a cheat/no-cheat classification immediately, so downstream systems can ban the player before further revenue loss or damage to the user experience. The other options fall short of the real-time requirement: batch prediction (A) runs asynchronous jobs rather than returning per-session results immediately, while B and C require you to load model files and manage serving infrastructure yourself, adding latency and operational overhead.
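As a sketch of what option D looks like in practice: after the model is uploaded to Model Registry and deployed to an endpoint (for example with the `gcloud ai models upload`, `gcloud ai endpoints create`, and `gcloud ai endpoints deploy-model` commands), the backend sends an online prediction request whose JSON body follows Vertex AI's `{"instances": [...]}` format. The feature names and values below are illustrative assumptions, not part of the question.

```python
import json

# One-time deployment steps (outside the request path), shown as comments
# because they require a GCP project; these are gcloud CLI commands:
#   gcloud ai models upload --region=REGION --display-name=cheat-detector \
#       --artifact-uri=gs://BUCKET/model/ --container-image-uri=IMAGE
#   gcloud ai endpoints create --region=REGION --display-name=cheat-endpoint
#   gcloud ai endpoints deploy-model ENDPOINT_ID --region=REGION \
#       --model=MODEL_ID --traffic-split=0=100

def build_predict_request(features: dict) -> str:
    """Build the JSON body for a Vertex AI online prediction request.

    Vertex AI endpoints accept POST bodies of the form {"instances": [...]},
    where each instance is one example to classify.
    """
    return json.dumps({"instances": [features]})

# Illustrative per-session features (assumed names, not from the question).
body = build_predict_request({
    "session_length_sec": 812,
    "headshot_ratio": 0.97,
    "actions_per_minute": 640,
})
print(body)
```

In production this body would be POSTed to the endpoint's `:predict` URL (or sent via `Endpoint.predict()` in the `google-cloud-aiplatform` Python SDK), and the response's predicted class would drive the downstream ban message.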
Author: LeetQuiz Editorial Team
You work at a mobile gaming startup that creates online multiplayer games. Recently, your company has observed an increase in players cheating in the games, causing a loss of revenue and negatively impacting the user experience. To address this issue, you developed a binary classification model to detect whether a player cheated after completing a game session. The plan is to send a message to other downstream systems to ban the player if cheating is detected. Your model has shown promising results during testing, and you are ready to deploy it to production. The goal is to ensure the serving solution provides immediate classifications right after a game session concludes in order to minimize revenue loss and maintain a fair gaming environment. What should you do to achieve this?
A
Import the model into Vertex AI Model Registry. Use the Vertex Batch Prediction service to run batch inference jobs.
B
Save the model files in a Cloud Storage bucket. Create a Cloud Function to read the model files and make online inference requests on the Cloud Function.
C
Save the model files in a VM. Load the model files each time there is a prediction request, and run an inference job on the VM.
D
Import the model into Vertex AI Model Registry. Create a Vertex AI endpoint that hosts the model, and make online inference requests.