
As a professional working for a public transportation company, you are tasked with developing a model that predicts delay times across various routes. Predictions will be served to users in real time through an app. Because seasonal changes and population growth erode the relevance of older data, the model must be retrained monthly. The solution must follow Google's recommended best practices, be cost-effective and scalable, and incur minimal downtime during retraining. Given these constraints, how would you design the end-to-end architecture for this predictive model? Choose the two best options.
A
Utilize Cloud Composer to programmatically schedule a Dataflow job that handles the workflow from training to deploying your model, ensuring scalability and cost-effectiveness.
B
Implement a Cloud Functions script that initiates a training and deploying job on AI Platform, triggered by Cloud Scheduler, focusing on minimal downtime and ease of setup.
C
Set up Kubeflow Pipelines to schedule your multi-step workflow, from training through model deployment, leveraging its built-in support for failure handling and efficient orchestration.
D
Opt for a model trained and deployed on BigQuery ML, with retraining activated by the scheduled query feature in BigQuery, prioritizing simplicity and integration with existing data warehouses.
E
Combine the use of Cloud Scheduler to trigger a Cloud Function that initiates a Kubeflow Pipeline for model retraining and deployment, ensuring both scalability and minimal downtime.
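For context, the trigger pattern described in options B and E (Cloud Scheduler invoking a Cloud Function that launches a retraining job) can be sketched as below. This is a minimal illustration, not part of the question: the project ID, bucket, trainer package path, and function names are placeholder assumptions, and the request body follows the AI Platform Training REST API (`projects.jobs.create`).

```python
# Sketch of a Cloud Scheduler -> Cloud Function retraining trigger
# (options B and E). Project, bucket, and package names below are
# placeholder assumptions, not values given in the question.
import time


def build_training_job(project_id: str, bucket: str) -> dict:
    """Build the request body for an AI Platform training job.

    The shape matches the AI Platform Training REST API:
    POST https://ml.googleapis.com/v1/projects/{project}/jobs
    """
    job_id = f"delay_model_retrain_{time.strftime('%Y%m%d_%H%M%S')}"
    return {
        "jobId": job_id,
        "trainingInput": {
            "scaleTier": "STANDARD_1",
            # Hypothetical trainer package uploaded ahead of time.
            "packageUris": [f"gs://{bucket}/packages/trainer-0.1.tar.gz"],
            "pythonModule": "trainer.task",
            "region": "us-central1",
        },
    }


def retrain(request):
    """HTTP-triggered Cloud Function; Cloud Scheduler calls it monthly."""
    # Imported inside the function so this sketch loads even where the
    # client library is absent; a deployed function would import at top.
    from googleapiclient import discovery

    project_id = "my-transit-project"  # placeholder
    body = build_training_job(project_id, bucket="my-transit-models")
    ml = discovery.build("ml", "v1", cache_discovery=False)
    ml.projects().jobs().create(
        parent=f"projects/{project_id}", body=body
    ).execute()
    return f"Started {body['jobId']}", 200
```

In option E the function body would instead submit a run to a Kubeflow Pipelines endpoint; the Scheduler-to-Function trigger stays the same.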