
Answer-first summary for fast verification
Answer: Using AI Platform Prediction, a fully managed service that supports online predictions and automatically scales based on demand.
AI Platform Prediction is the optimal choice because it is a fully managed service that automatically scales machine learning models in the cloud, supporting both online and batch predictions without manual infrastructure management. Options A and B are less ideal because they require manual setup for scaling and load balancing, increasing operational complexity. Option D, Kubeflow, is not a managed service and is better suited for deploying ML workflows across different environments than for providing a simple, scalable solution for online predictions.

For more details, refer to the following resources:

- [Scaling machine learning predictions](https://cloud.google.com/blog/products/ai-machine-learning/scaling-machine-learning-predictions)
- [AI Platform Prediction overview](https://cloud.google.com/ai-platform/prediction/docs/overview)
- [Cook your own ML recipes on AI Platform](https://cloud.google.com/blog/topics/developers-practitioners/cook-your-own-ml-recipes-ai-platform)
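To make the online-prediction workflow concrete, here is a minimal Python sketch of how a client would address an AI Platform Prediction model. The project, model, and version names are placeholders (not taken from the question); the actual network call through the Google API client is shown only in comments because it requires GCP credentials.

```python
# Sketch: building the request for AI Platform Prediction online prediction.
# "my-project", "forecast_model", and "v1" are placeholder names.

def prediction_resource_name(project, model, version=None):
    """Build the fully qualified resource name used by the
    projects.predict REST method. Omitting the version targets
    the model's default version."""
    name = f"projects/{project}/models/{model}"
    if version:
        name += f"/versions/{version}"
    return name

def build_request_body(instances):
    """Online prediction expects a JSON body with an 'instances' list,
    one entry per input example."""
    return {"instances": instances}

# The actual call would use the Google API Python client (needs credentials):
#   from googleapiclient import discovery
#   service = discovery.build("ml", "v1")
#   response = service.projects().predict(
#       name=prediction_resource_name("my-project", "forecast_model"),
#       body=build_request_body([{"feature_1": 0.5}]),
#   ).execute()

print(prediction_resource_name("my-project", "forecast_model", "v1"))
```

Because the service is fully managed, the client only needs this resource name and request body; provisioning, load balancing, and autoscaling happen behind the endpoint.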
Author: LeetQuiz Editorial Team
Your team is developing an online forecasting model that needs to be deployed in production. The model must integrate seamlessly with a web interface, Dialogflow, and Google Assistant, serving a global audience and anticipating high request volumes. Given the constraints of high efficiency, scalability, and minimal operational overhead, you are tasked with selecting the most suitable GCP solution. The solution should be fully managed, reducing the complexity of infrastructure management, and should scale automatically to handle varying loads. Considering these requirements, which of the following options best meets your needs? Choose the best option.
A
Deploying the model on Google Kubernetes Engine (GKE) with TensorFlow serving, requiring manual setup for autoscaling and load balancing.
B
Utilizing Virtual Machines (VMs) with Autoscaling Groups and an Application Load Balancer, necessitating manual configuration for scaling and management.
C
Using AI Platform Prediction, a fully managed service that supports online predictions and automatically scales based on demand.
D
Implementing Kubeflow for deploying the model, which involves setting up and managing the ML workflow across different environments.