
Answer-first summary for fast verification
Answer: (C) Set up Kubeflow Pipelines to schedule your multi-step workflow, from training through model deployment, leveraging its built-in failure handling and efficiency; and (E) use Cloud Scheduler to trigger a Cloud Function that initiates a Kubeflow Pipeline for model retraining and deployment, ensuring both scalability and minimal downtime.
Kubeflow Pipelines (Option C) is the strongest choice for the end-to-end architecture: it schedules multi-step workflows, handles failures, and deploys models efficiently, in line with Google's best practices for scalability and minimal downtime. Combining Cloud Scheduler, a Cloud Function, and Kubeflow Pipelines (Option E) adds a managed, cron-like trigger on top of the same pipeline, making it the natural second pick for monthly retraining. BigQuery ML (Option D) simplifies integration with existing data warehouses but lacks Kubeflow's multi-step workflow management. Cloud Composer with Dataflow (Option A) and Cloud Functions with AI Platform (Option B) are partial solutions: Dataflow is built for data processing rather than full training-to-deployment orchestration, and a bare Cloud Functions script offers no workflow management or failure handling.
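The Option E trigger path can be sketched as a small Python Cloud Function that Cloud Scheduler invokes (here via Pub/Sub, using the 1st-gen function signature) and that launches a run of a pre-compiled Kubeflow pipeline. This is a minimal sketch, not a definitive implementation: the KFP endpoint URL, the pipeline package filename `retrain_and_deploy.yaml`, and the `train_month` parameter are assumptions for illustration.

```python
# Hypothetical Cloud Function for Option E: Cloud Scheduler publishes a
# Pub/Sub message monthly, which triggers this function; the function then
# starts a Kubeflow Pipelines run that retrains and redeploys the model.
from datetime import datetime, timezone


def build_run_name(prefix: str = "delay-model-retrain") -> str:
    """Build a unique, sortable run name so each monthly run is traceable."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{prefix}-{stamp}"


def trigger_retraining(event, context):
    """Pub/Sub-triggered entry point (Cloud Functions 1st-gen signature)."""
    import kfp  # imported lazily so the deploy package stays small

    # The host URL below is a placeholder for your KFP endpoint.
    client = kfp.Client(host="https://<your-kfp-endpoint>")
    client.create_run_from_pipeline_package(
        pipeline_file="retrain_and_deploy.yaml",  # assumed compiled pipeline
        run_name=build_run_name(),
        arguments={"train_month": datetime.now(timezone.utc).strftime("%Y-%m")},
    )
```

Because the pipeline deploys the new model version only after training succeeds, a failed retraining run leaves the currently serving model untouched, which is what keeps downtime minimal.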
Author: LeetQuiz Editorial Team
As a professional working for a public transportation company, you are tasked with developing a model to predict delay times across various routes. These predictions will be delivered to users in real-time via an app. Given that seasonal changes and population growth affect data relevance, the model requires monthly retraining. The solution must adhere to Google's recommended best practices, be cost-effective, scalable, and ensure minimal downtime during retraining. Considering these constraints, how would you design the end-to-end architecture for this predictive model? Choose the two best options.
A
Utilize Cloud Composer to programmatically schedule a Dataflow job that handles the workflow from training to deploying your model, ensuring scalability and cost-effectiveness.
B
Implement a Cloud Functions script that initiates a training and deploying job on AI Platform, triggered by Cloud Scheduler, focusing on minimal downtime and ease of setup.
C
Set up Kubeflow Pipelines to schedule your multi-step workflow, encompassing training through to model deployment, leveraging its comprehensive solution for failure handling and efficiency.
D
Opt for a model trained and deployed on BigQuery ML, with retraining activated by the scheduled query feature in BigQuery, prioritizing simplicity and integration with existing data warehouses.
E
Combine the use of Cloud Scheduler to trigger a Cloud Function that initiates a Kubeflow Pipeline for model retraining and deployment, ensuring both scalability and minimal downtime.