
Answer-first summary for fast verification
Answer: Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs, SSH into the VM, and use MultiWorkerMirroredStrategy to train the model.
Option D is the correct answer. With an n1-standard-4 VM and 4 NVIDIA P100 GPUs, you can use MultiWorkerMirroredStrategy, which performs synchronous distributed training across multiple workers. Because training runs on a standard VM you SSH into, intermediate states and variables remain easy to inspect during debugging. TPUs (options A and B) can train faster, but they are harder to debug because of limited tooling and visualization support. ParameterServerStrategy (option C) introduces asynchronous parameter updates and extra coordination, which complicates debugging. MultiWorkerMirroredStrategy keeps the environment straightforward to debug while distributing the workload across the GPUs to reduce training time.
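The recommended setup can be sketched in a few lines of TensorFlow. This is a minimal illustration, not the question's actual workload: the model architecture and the random training data are hypothetical placeholders, assuming TensorFlow 2.x. Created on a single VM, the strategy mirrors variables across all local GPUs; when `TF_CONFIG` is set, it also synchronizes across worker VMs.

```python
import numpy as np
import tensorflow as tf

# Create the strategy early, before other TF operations.
# On one n1-standard-4 VM with 4 P100s, each GPU becomes a replica.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside scope() are mirrored on every replica
    # and kept in sync after each gradient step.
    # Placeholder architecture, for illustration only.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Placeholder training data, for illustration only.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

Because the whole job runs in one ordinary Python process that you can SSH into, standard debugging tools (pdb, print statements, TensorBoard) work as usual, which is the deciding factor in this question.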
Author: LeetQuiz Editorial Team
You are collaborating with a team of researchers to develop cutting-edge algorithms for financial analysis. Your team frequently develops and debugs intricate TensorFlow models. You aim to balance the ease of debugging with the need to reduce model training time. Considering these requirements, how should you configure your training environment?
A
Configure a v3-8 TPU VM. SSH into the VM to train and debug the model.
B
Configure a v3-8 TPU node. Use Cloud Shell to SSH into the Host VM to train and debug the model.
C
Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs. SSH into the VM and use ParameterServerStrategy to train the model.
D
Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs. SSH into the VM and use MultiWorkerMirroredStrategy to train the model.