
You recently designed and built a custom neural network that relies on critical dependencies specific to your organization's framework. You want to train this model using a managed training service on Google Cloud. However, the ML framework and related dependencies are not supported by AI Platform Training. Compounding the challenge, both your model and your dataset are too large to fit in memory on a single machine. Given that your ML framework of choice uses a scheduler, workers, and servers distribution structure, what should you do to train your model?
A. Use a built-in model available on AI Platform Training.
B. Build your custom container to run jobs on AI Platform Training.
C. Build your custom containers to run distributed training jobs on AI Platform Training.
D. Reconfigure your code to an ML framework with dependencies that are supported by AI Platform Training.
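The custom-container, distributed-training approach described in option C can be sketched with the `gcloud ai-platform jobs submit training` command. The project ID, image name, region, machine types, and replica counts below are illustrative placeholders, not values from the question; the master/worker/parameter-server roles map onto the framework's scheduler/workers/servers structure.

```shell
# Hypothetical sketch: package the unsupported framework and its
# dependencies in a custom container, then run a distributed job on
# AI Platform Training. All names below are placeholder assumptions.
PROJECT_ID=my-project
IMAGE_URI=gcr.io/${PROJECT_ID}/my-trainer:latest

# Build and push the container that bundles the custom framework.
docker build -t "${IMAGE_URI}" .
docker push "${IMAGE_URI}"

# Submit a CUSTOM scale-tier job: one master (the scheduler), plus
# worker and parameter-server replicas, each running the same image.
gcloud ai-platform jobs submit training my_distributed_job \
  --region=us-central1 \
  --scale-tier=CUSTOM \
  --master-machine-type=n1-highmem-8 \
  --master-image-uri="${IMAGE_URI}" \
  --worker-machine-type=n1-highmem-8 \
  --worker-image-uri="${IMAGE_URI}" \
  --worker-count=4 \
  --parameter-server-machine-type=n1-highmem-8 \
  --parameter-server-image-uri="${IMAGE_URI}" \
  --parameter-server-count=2
```

Because the container images are supplied per replica type, the service imposes no framework restrictions, and splitting the job across workers and parameter servers addresses the single-machine memory limit.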