
Answer-first summary for fast verification
Answer: Utilize Google Cloud AI Platform for distributed training, benefiting from its managed services and seamless integration with TensorFlow Estimators.
**Why Google Cloud AI Platform?**

- **Minimal Code Refactoring:** AI Platform's API closely mirrors TensorFlow Estimators, facilitating an easy transition from local to cloud training with minimal code adjustments.
- **Distributed Training:** It automatically distributes training tasks across multiple machines, enabling efficient scaling for large datasets without the need for manual cluster configuration.
- **Reduced Infrastructure Overhead:** AI Platform manages all underlying infrastructure, including hardware and networking, significantly reducing setup and management tasks.

While options like Dataproc, GKE with Kubeflow Pipelines, and Managed Instance Groups offer viable solutions, they require more extensive configuration and management. AI Platform stands out for its ease of use, managed services, and seamless integration with TensorFlow, making it the optimal choice for minimizing both code changes and infrastructure complexity.
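The "minimal refactoring" point rests on a concrete mechanism: AI Platform injects a `TF_CONFIG` environment variable describing the cluster into each training node, and TensorFlow Estimators read it automatically, so the same `train_and_evaluate` code runs locally and distributed. Below is a minimal, stdlib-only sketch of how that variable can be inspected; the hostnames and port numbers are hypothetical placeholders, not values from the question.

```python
import json
import os

def parse_tf_config(env=os.environ):
    """Return (task_type, task_index, cluster) from the TF_CONFIG
    environment variable that AI Platform sets on each training node.
    Falls back to a single-process ("chief") view when unset."""
    tf_config = json.loads(env.get("TF_CONFIG", "{}"))
    task = tf_config.get("task", {})
    return (
        task.get("type", "chief"),
        task.get("index", 0),
        tf_config.get("cluster", {}),
    )

# Hypothetical TF_CONFIG as it might appear on worker 0 of a small cluster:
sample = {
    "cluster": {
        "chief": ["host0:2222"],
        "worker": ["host1:2222", "host2:2222"],
        "ps": ["host3:2222"],
    },
    "task": {"type": "worker", "index": 0},
}
task_type, task_index, cluster = parse_tf_config({"TF_CONFIG": json.dumps(sample)})
```

Because the platform, not your code, writes `TF_CONFIG`, the Estimator training script needs no cluster-specific changes when moving from a laptop to a multi-worker AI Platform job, which is exactly the advantage option C claims over the manually configured alternatives.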
Author: LeetQuiz Editorial Team
You are working on a project to improve the classification of customer support emails for a large e-commerce platform. Initially, you developed models using TensorFlow Estimators with small datasets on your local system. The business has now decided to scale up the operation, requiring the models to be trained with significantly larger datasets to enhance accuracy and performance. The goal is to migrate the training process to Google Cloud with minimal code changes and infrastructure overhead, ensuring a smooth transition from on-premises to cloud-based training. Given the constraints of minimizing code refactoring and infrastructure complexity, which of the following options is the BEST approach? (Choose one correct option)
A
Set up a Hadoop cluster on Google Cloud Dataproc for distributed training, leveraging its compatibility with TensorFlow.
B
Deploy the training workload on a Google Kubernetes Engine (GKE) cluster using Kubeflow Pipelines, taking advantage of its orchestration capabilities.
C
Utilize Google Cloud AI Platform for distributed training, benefiting from its managed services and seamless integration with TensorFlow Estimators.
D
Configure a Managed Instance Group with autoscaling to handle the training workload, ensuring resources are efficiently utilized based on demand.