You are tasked with training a custom language model for your company using a large dataset. To handle the computational load effectively, you decide to use the Reduction Server strategy on Google's Vertex AI, which reduces the bandwidth and latency of all-reduce operations in multi-node GPU training. You need to configure the worker pools for this distributed training job on Vertex AI. Which worker pool configuration should you choose to ensure optimal performance?
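
For reference, a minimal sketch of what such a worker pool layout might look like with the Vertex AI Python SDK is shown below. The project ID, trainer image URI, machine types, accelerator counts, and replica counts are illustrative assumptions only; the general pattern is GPU workers in the first two pools and Reduction Server replicas on CPU-only machines in the third pool.

```python
from google.cloud import aiplatform

# Hypothetical project, bucket, and image values for illustration only.
aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

worker_pool_specs = [
    # Pool 0: primary (chief) replica with GPUs, running the training container.
    {
        "machine_spec": {
            "machine_type": "n1-standard-16",
            "accelerator_type": "NVIDIA_TESLA_V100",
            "accelerator_count": 4,
        },
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    },
    # Pool 1: additional GPU workers running the same training container.
    {
        "machine_spec": {
            "machine_type": "n1-standard-16",
            "accelerator_type": "NVIDIA_TESLA_V100",
            "accelerator_count": 4,
        },
        "replica_count": 3,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    },
    # Pool 2: Reduction Server replicas on CPU-only, high-bandwidth machines,
    # using Google's prebuilt reduction server image (URI assumed from docs).
    {
        "machine_spec": {"machine_type": "n1-highcpu-16"},
        "replica_count": 4,
        "container_spec": {
            "image_uri": "us-docker.pkg.dev/vertex-ai-restricted/training/reductionserver:latest"
        },
    },
]

job = aiplatform.CustomJob(
    display_name="llm-distributed-training",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```

The key design point this sketch illustrates is that the reduction servers do not run the training code and need no GPUs; they only aggregate gradients, so CPU-only machines with high network bandwidth are the cost-effective choice for that pool.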