
Answer-first summary for fast verification
Answer: Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
The correct answer is C. The inventory of thousands of web banners and the 300ms@p99 latency requirement call for a database that serves low-latency reads and writes of rapidly changing user navigation context at scale, and Cloud Bigtable is designed for exactly that access pattern. The key phrase in the problem statement, 'an inventory of thousands of web banners', points to a database that can handle such scale effectively. While Firestore (option B) is easier to set up, it may not meet the stringent latency requirement as reliably as Bigtable. Therefore option C, which uses Cloud Bigtable for writing and reading the user's navigation context and deploys the model on AI Platform Prediction, is the most appropriate solution.
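To make the option C pipeline concrete, here is a minimal sketch of the gateway's request flow: write the current page to the navigation-context store (Bigtable in option C), read back the recent context, and call the model (AI Platform Prediction) to pick a banner. The function names, the stub classes, and the stand-in scoring logic are all illustrative assumptions, not real GCP client APIs; the stubs exist only so the flow can be exercised locally.

```python
def serve_banner(user_id, current_page, context_store, model_client):
    """Gateway handler (option C): record the page view, read recent
    navigation context, then ask the model to rank the banner inventory."""
    # 1. Write: append the current page to the user's navigation history.
    context_store.append(user_id, current_page)
    # 2. Read: fetch the user's recent navigation context -- the
    #    low-latency point read that motivates the Bigtable choice.
    context = context_store.recent(user_id, limit=10)
    # 3. Predict: send the context to the model and return the top banner.
    scores = model_client.predict({"user_id": user_id, "context": context})
    return max(scores, key=scores.get)


class InMemoryContextStore:
    """Local stand-in for the Bigtable-backed context store."""
    def __init__(self):
        self._rows = {}

    def append(self, user_id, page):
        self._rows.setdefault(user_id, []).append(page)

    def recent(self, user_id, limit):
        return self._rows.get(user_id, [])[-limit:]


class StubModel:
    """Local stand-in for the AI Platform Prediction client."""
    def predict(self, instance):
        # Pretend banners related to the last-visited page score highest.
        last = instance["context"][-1] if instance["context"] else "home"
        return {"banner_" + last: 0.9, "banner_generic": 0.1}
```

For example, after a user browses "flights" and then "hotels", `serve_banner("u1", "hotels", store, model)` returns `"banner_hotels"` with these stubs. Injecting the store and model client this way also mirrors how the App Engine gateway would hold real Bigtable and prediction clients.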
Author: LeetQuiz Editorial Team
You work for an online travel agency that generates revenue not only through bookings but also by selling advertising placements on its website to other companies. Your task is to predict and display the most relevant web banner to users as they navigate the site. Ensuring security is crucial for your company. Given that the model's latency requirements are 300ms@p99, with an inventory of thousands of web banners, and your exploratory analysis indicates that users' navigation context is a strong predictor for ad relevance, you now need to implement the simplest solution. How should you configure the prediction pipeline?
A. Embed the client on the website, and then deploy the model on AI Platform Prediction.
B. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Firestore for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
D. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user’s navigation context, and then deploy the model on Google Kubernetes Engine.