
Answer-first summary for fast verification
Answer: Use the AI Platform custom containers feature, which lets you run training jobs with any machine learning framework, including those not natively supported, by packaging the framework in a custom container.
The AI Platform custom containers feature is the most suitable option because it supports any ML framework, including those not covered by the AI Platform Training runtime versions. By building a container image that includes your chosen framework and training code, you can run jobs on AI Platform Training as a fully managed service: Google handles provisioning and scaling, so the team avoids the administrative burden described in the question. The alternatives fall short: Kubeflow's TF Job targets TensorFlow rather than the full mix of frameworks, while Slurm and shared VM images leave you managing infrastructure yourself. More details can be found [here](https://cloud.google.com/ai-platform/training/docs/containers-overview#advantages_of_custom_containers).
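The workflow above can be sketched with a few commands. This is a minimal, hedged example: the project ID, image name, trainer module, and job name are all placeholders, and the trainer code itself is assumed to live in a local `trainer/` directory.

```shell
# A minimal sketch (project, image, trainer module, and job names are placeholders).

# 1. Write a Dockerfile that bundles a framework AI Platform does not
#    natively support (here, an illustrative Scikit-learn trainer):
cat > Dockerfile <<'EOF'
FROM python:3.10-slim
RUN pip install --no-cache-dir scikit-learn pandas
COPY trainer/ /trainer/
ENTRYPOINT ["python", "-m", "trainer.task"]
EOF

# 2. Build the image and push it to Container Registry:
docker build -t gcr.io/my-project/sklearn-trainer:v1 .
docker push gcr.io/my-project/sklearn-trainer:v1

# 3. Submit a training job that runs the custom image:
gcloud ai-platform jobs submit training sklearn_job_1 \
  --region=us-central1 \
  --master-image-uri=gcr.io/my-project/sklearn-trainer:v1
```

Because the framework lives inside the image rather than in a managed runtime version, the same submission flow works unchanged for Keras, PyTorch, Theano, or custom libraries.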
Author: LeetQuiz Editorial Team
Your team of data scientists uses a cloud-based backend system to submit training jobs, which has become increasingly complex and time-consuming to manage because of the variety of machine learning frameworks in use, including Keras, PyTorch, Theano, Scikit-learn, and custom libraries. You are tasked with selecting a managed service that simplifies administration while accommodating these diverse frameworks. The solution must also be scalable, cost-effective, and easy to integrate with existing workflows. Which of the following options provides the best course of action? (Choose one correct option)
A
Configure Kubeflow to operate on Google Kubernetes Engine and accept training jobs via TF Job, leveraging its compatibility with TensorFlow for streamlined operations.
B
Establish a Slurm workload manager to handle and schedule jobs on your cloud infrastructure, offering fine-grained control over resource allocation and job scheduling.
C
Utilize the AI Platform custom containers feature to accommodate training jobs across any framework, providing the flexibility to use custom containers for any machine learning framework not natively supported.
D
Develop a collection of VM images on Compute Engine and share these images in a centralized repository, enabling data scientists to deploy their preferred environments quickly.