
You have developed a Transformer model in TensorFlow for text translation. Your training data comprises millions of documents stored in a Cloud Storage bucket. To reduce training time, you want to use distributed training. You also want to minimize the effort required to modify the existing code and to manage the cluster configuration. Given these requirements and the need to handle large-scale data effectively, which approach should you choose?
A
Create a Vertex AI custom training job with GPU accelerators for the second worker pool. Use tf.distribute.MultiWorkerMirroredStrategy for distribution.
B
Create a Vertex AI custom distributed training job with Reduction Server. Use N1 high-memory machine type instances for the first and second pools, and use N1 high-CPU machine type instances for the third worker pool.
C
Create a training job that uses Cloud TPU VMs. Use tf.distribute.TPUStrategy for distribution.
D
Create a Vertex AI custom training job with a single worker pool of A2 GPU machine type instances. Use tf.distribute.MirroredStrategy for distribution.
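For context on the tf.distribute strategies named in the options: moving existing single-device Keras code to a strategy such as tf.distribute.MultiWorkerMirroredStrategy typically requires only wrapping model construction and compilation in strategy.scope(); the fit() call itself is unchanged, and on Vertex AI custom training the TF_CONFIG environment variable describing the worker pools is generally populated for you. The sketch below is illustrative only, not the exam's reference answer, and build_translation_model() and make_dataset() are hypothetical stand-ins for the existing Transformer model and Cloud Storage input pipeline.

import tensorflow as tf

def build_translation_model():
    # Hypothetical placeholder: the existing Transformer translation model
    # would be constructed here instead of this toy stack.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=32000, output_dim=512),
        tf.keras.layers.Dense(32000),
    ])

def make_dataset():
    # Hypothetical placeholder: in practice this would read the documents
    # from the Cloud Storage bucket, e.g. via tf.data.TFRecordDataset("gs://...").
    tokens = tf.random.uniform((1024, 64), maxval=32000, dtype=tf.int32)
    labels = tf.random.uniform((1024, 64), maxval=32000, dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((tokens, labels)).batch(64)

# The strategy reads the cluster layout from TF_CONFIG when it is set
# (as Vertex AI does for multi-worker jobs) and otherwise falls back to
# a single worker, so the same script runs locally and on the cluster.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Only model creation and compilation move inside the scope;
    # this is the extent of the code change for the existing model.
    model = build_translation_model()
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

model.fit(make_dataset(), epochs=1)

Swapping in tf.distribute.TPUStrategy (option C) or tf.distribute.MirroredStrategy (option D) would follow the same pattern, differing mainly in how the strategy object is constructed and which hardware the job requests.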