
Answer-first summary for fast verification
Answer: Build your custom containers to run distributed training jobs on AI Platform Training.
The correct answer is C: 'Build your custom containers to run distributed training jobs on AI Platform Training.' Custom containers let you bring ML frameworks, non-ML dependencies, libraries, and binaries that AI Platform Training does not otherwise support. Because both the model and the dataset are too large to fit into memory on a single machine, the job must also be distributed; a single custom container (option B) would still be bound by one machine's memory. AI Platform Training's master, worker, and parameter-server roles map directly onto your framework's scheduler, workers, and servers structure, so you can build a container image for each role and submit them together as one distributed training job.
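As a sketch, such a distributed custom-container job can be submitted with `gcloud`, assigning one container image to each role. The project, image URIs, region, machine types, and replica counts below are illustrative placeholders, not values given in the question:

```shell
# Submit a distributed training job on AI Platform Training where each role
# (master/scheduler, workers, parameter servers) runs a custom container image.
# All names, counts, and the region are placeholder assumptions.
gcloud ai-platform jobs submit training my_custom_job \
  --region us-central1 \
  --scale-tier CUSTOM \
  --master-machine-type n1-standard-8 \
  --master-image-uri gcr.io/my-project/trainer-scheduler:latest \
  --worker-count 4 \
  --worker-machine-type n1-standard-8 \
  --worker-image-uri gcr.io/my-project/trainer-worker:latest \
  --parameter-server-count 2 \
  --parameter-server-machine-type n1-standard-4 \
  --parameter-server-image-uri gcr.io/my-project/trainer-ps:latest
```

Inside each running container, AI Platform Training exposes the cluster layout through environment variables (such as `TF_CONFIG` and `CLUSTER_SPEC`), which your framework's entry point can read to determine whether that replica should act as the scheduler, a worker, or a server.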
Author: LeetQuiz Editorial Team
You recently designed and built a custom neural network that incorporates critical dependencies specific to your organization’s framework. Your objective is to train this model using a managed training service on Google Cloud. However, you face a challenge: the ML framework and related dependencies you utilized are not supported by AI Platform Training. Compounding this challenge, both your model and the dataset are too large to fit into memory on a single machine. Given that your ML framework of choice employs a scheduler, workers, and servers distribution structure, what should you do to successfully train your model?
A
Use a built-in model available on AI Platform Training.
B
Build your custom container to run jobs on AI Platform Training.
C
Build your custom containers to run distributed training jobs on AI Platform Training.
D
Reconfigure your code to an ML framework with dependencies that are supported by AI Platform Training.