
Answer-first summary for fast verification
Answer: Send incoming prediction requests to a Pub/Sub topic. Set up a Cloud Function that is triggered when messages are published to the Pub/Sub topic. Implement your preprocessing logic in the Cloud Function. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.
Option C is the most suitable architecture for this scenario because Cloud Functions scale automatically with the volume of incoming requests, keeping the solution both scalable and cost-effective: you pay only per invocation. Being serverless, it eliminates infrastructure management and reduces operational overhead, and each message is processed in near real time, preserving the low latency required for high-throughput online prediction. The other options fall short: polling a Cloud Spanner view every second (A) is inefficient and potentially expensive for a streaming workload; creating and deploying a new model trained on raw data (B) adds unnecessary complexity and cost when the existing model already works on preprocessed features; and a streaming Dataflow job (D) typically introduces more latency and operational overhead than a lightweight function for per-request transformations.
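The flow in option C can be sketched as a Pub/Sub-triggered Cloud Function. This is a minimal illustration, not a production implementation: the field names in `preprocess` and the injected `predict_fn`/`publish_fn` callables are hypothetical stand-ins so the flow can be exercised without GCP credentials.

```python
import base64
import json

def preprocess(record):
    # Apply the same feature engineering used at training time
    # (hypothetical example: scale a numeric field and one-hot
    # encode a categorical field).
    features = [record["amount"] / 100.0]
    features += [1.0 if record["category"] == c else 0.0
                 for c in ("a", "b", "c")]
    return features

def handle_pubsub_event(event, predict_fn, publish_fn):
    """Cloud Function entry point for a Pub/Sub-triggered function.

    `predict_fn` stands in for the AI Platform online-prediction call
    and `publish_fn` for the outbound Pub/Sub publisher; both are
    injected so the flow is testable locally.
    """
    # Pub/Sub delivers the message payload base64-encoded.
    record = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    instance = preprocess(record)
    prediction = predict_fn([instance])
    publish_fn(json.dumps({"input": record, "prediction": prediction}))
```

In a real deployment, `predict_fn` would wrap the AI Platform prediction API (e.g. `googleapiclient.discovery.build("ml", "v1")` and `projects().predict(...)`) and `publish_fn` a `google.cloud.pubsub_v1.PublisherClient` publishing to the outbound topic.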
Author: LeetQuiz Editorial Team
In the context of deploying a machine learning model on Google's AI Platform for high-throughput online predictions, where the model requires computationally expensive preprocessing operations identical to those used during training, which architecture ensures scalability, cost-effectiveness, and low latency? Consider the following constraints: the solution must handle a high volume of requests in real-time, minimize operational overhead, and avoid the need for retraining the model with raw data. Choose the best option from the following:
A
Stream incoming prediction request data into Cloud Spanner. Create a view to abstract your preprocessing logic. Query the view every second for new records. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.
B
Validate the accuracy of the model that you trained on preprocessed data. Create a new model that uses the raw data and is available in real time. Deploy the new model onto AI Platform for online prediction.
C
Send incoming prediction requests to a Pub/Sub topic. Set up a Cloud Function that is triggered when messages are published to the Pub/Sub topic. Implement your preprocessing logic in the Cloud Function. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.
D
Send incoming prediction requests to a Pub/Sub topic. Transform the incoming data using a Dataflow job. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.