
Answer-first summary for fast verification
Answer: Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
Option B is correct because it balances simplicity with the required performance and security. Embedding the client on the website and deploying the gateway on App Engine lets the system handle web requests securely and efficiently: the gateway mediates all traffic, so the model endpoint is never exposed directly to the browser (which is why option A, with no gateway, falls short on security). Deploying the model on AI Platform Prediction can meet the latency requirement of 300 ms at the 99th percentile. Because the user's navigation context arrives with each request, no database layer is needed, making this the simplest approach. Options C and D introduce additional components (a Cloud Bigtable or Memorystore database for the navigation context, and in D a Google Kubernetes Engine deployment) that are unnecessary for the simplest solution.
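To make the flow concrete, here is a minimal sketch of the gateway's role in option B. The project id, model name, and the shape of the model's response (`banner_id`) are hypothetical; the endpoint URL format and the `{"instances": [...]}` request body follow the AI Platform Prediction REST `predict` method.

```python
import json

PROJECT = "my-travel-agency"   # hypothetical project id
MODEL = "banner_ranker"        # hypothetical model name


def predict_url(project: str, model: str) -> str:
    """Build the AI Platform Prediction REST endpoint for a model."""
    return (f"https://ml.googleapis.com/v1/projects/{project}"
            f"/models/{model}:predict")


def build_request(navigation_context: dict) -> str:
    """Wrap the user's navigation context in the JSON body the
    predict method expects: {"instances": [...]}."""
    return json.dumps({"instances": [navigation_context]})


def pick_banner(response_body: str) -> str:
    """Extract the top banner id from the prediction response,
    assuming the model returns {"predictions": [{"banner_id": ...}]}."""
    predictions = json.loads(response_body)["predictions"]
    return predictions[0]["banner_id"]
```

The App Engine gateway would authenticate the incoming web request, call `predict_url(...)` with an authorized HTTP POST of `build_request(...)`, and return `pick_banner(...)` to the client, keeping the model endpoint behind the gateway rather than exposed to the browser.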
Author: LeetQuiz Editorial Team
You work for an online travel agency that also sells advertising placements on its website to other companies. Your task is to build a system to predict the most relevant web banner that a user should see next based on their navigation context. Security is a top priority for your company, and the model needs to meet a latency requirement of 300ms at the 99th percentile. The website handles thousands of web banners, and your exploratory data analysis indicates that a user's navigation context is a valuable predictor for the next banner. You want to implement the simplest solution that meets these requirements. How should you configure the prediction pipeline?
A
Embed the client on the website, and then deploy the model on AI Platform Prediction.
B
Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
C
Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user's navigation context, and then deploy the model on AI Platform Prediction.
D
Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user's navigation context, and then deploy the model on Google Kubernetes Engine.