
Answer-first summary for fast verification
Answer: Send incoming prediction requests to a Pub/Sub topic. Transform the incoming data using a Dataflow job. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.
The correct answer is B. Pub/Sub provides a scalable message queue, and Dataflow provides efficient, autoscaling stream processing, which is well suited to computationally expensive preprocessing. Transforming the incoming data in Dataflow before submitting the prediction request to AI Platform sustains high throughput and guarantees that the same preprocessing logic runs at training and prediction time. The other options fall short: Cloud Functions (Option D) are not designed for heavy, sustained preprocessing workloads; retraining a new model on raw data (Option A) abandons the required preprocessing instead of reproducing it; and streaming data into Cloud Spanner and polling a view every second (Option C) adds latency and misuses a transactional database for stream processing.
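To make the consistency requirement concrete, here is a minimal sketch of the per-record preprocessing that the Dataflow job would apply to each Pub/Sub message before calling AI Platform. The field names, statistics, and z-score normalization are illustrative assumptions, not part of the question; in a real Dataflow pipeline this logic would live inside a Beam DoFn.

```python
import json

# Statistics computed once during training and reused at prediction time,
# so the preprocessing is identical in both pipelines (illustrative values).
TRAINING_STATS = {
    "age": {"mean": 38.5, "std": 12.2},
    "income": {"mean": 52000.0, "std": 18000.0},
}

def preprocess(record: dict) -> dict:
    """Apply the same z-score normalization used at training time.

    In the Dataflow job this would run inside a DoFn, once per
    Pub/Sub message, before the AI Platform prediction request.
    """
    return {
        name: (float(record[name]) - stats["mean"]) / stats["std"]
        for name, stats in TRAINING_STATS.items()
    }

def handle_message(payload: bytes) -> dict:
    """Decode a Pub/Sub message body into a model-ready request."""
    record = json.loads(payload.decode("utf-8"))
    return {"instances": [preprocess(record)]}

# One incoming prediction request, as it would arrive from Pub/Sub:
request = handle_message(b'{"age": 50, "income": 70000}')
```

Because the normalization statistics are loaded from the training run rather than recomputed, the features the model sees at prediction time match those it was trained on, which is the core reason Option B preserves training/serving consistency.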
Author: LeetQuiz Editorial Team
You have trained a machine learning model on a dataset that required computationally expensive preprocessing operations, such as feature extraction and data normalization. Now, you need to execute the same preprocessing steps at prediction time to ensure consistency in the data pipeline. You have deployed this model on Google AI Platform with the goal of achieving high-throughput online prediction. Considering the need for efficient and scalable preprocessing, which architecture should you use?
A
Validate the accuracy of the model that you trained on preprocessed data. Create a new model that uses the raw data and is available in real time. Deploy the new model onto AI Platform for online prediction.
B
Send incoming prediction requests to a Pub/Sub topic. Transform the incoming data using a Dataflow job. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.
C
Stream incoming prediction request data into Cloud Spanner. Create a view to abstract your preprocessing logic. Query the view every second for new records. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.
D
Send incoming prediction requests to a Pub/Sub topic. Set up a Cloud Function that is triggered when messages are published to the Pub/Sub topic. Implement your preprocessing logic in the Cloud Function. Submit a prediction request to AI Platform using the transformed data. Write the predictions to an outbound Pub/Sub queue.