
Answer-first summary for fast verification
Answer: Update the WorkerPoolSpec to use a machine with 24 vCPUs and 3 NVIDIA Tesla V100 GPUs.
The question asks how to reduce training time without compromising model accuracy. Option D (a machine with 24 vCPUs and 3 NVIDIA Tesla V100 GPUs) is optimal: it enables data-parallel training across multiple GPUs, which substantially shortens training while leaving the model architecture and dataset unchanged, so accuracy is preserved. Option C (24 vCPUs but still only 1 GPU) adds CPU capacity that a GPU-bound training job barely uses, so the speedup over the current setup is minimal. Option A (reducing the number of layers) simplifies the model and would likely degrade accuracy. Option B (training on a stratified subset) trains faster but on less data, which also risks accuracy. The community discussion, though brief, shows 100% of votes for C; D remains the stronger choice because it scales GPU compute without altering the model or the data.
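As a rough illustration of what the recommended change looks like, the sketch below builds a `worker_pool_specs` list of the shape accepted by Vertex AI custom training jobs (e.g. `google.cloud.aiplatform.CustomJob`). The container image URI and the exact machine type name are placeholder assumptions, not values from the question; verify which machine types and V100 accelerator counts are actually available in your region.

```python
# Hedged sketch of the WorkerPoolSpec change from option D.
# Machine type and image URI below are illustrative assumptions only.
worker_pool_specs = [
    {
        "replica_count": 1,
        "machine_spec": {
            # Placeholder machine type: pick one in your region with
            # roughly 24 vCPUs that supports attaching V100 GPUs.
            "machine_type": "n1-standard-16",
            "accelerator_type": "NVIDIA_TESLA_V100",
            "accelerator_count": 3,  # multiple GPUs for data-parallel training
        },
        "container_spec": {
            # Hypothetical training container image.
            "image_uri": "gcr.io/my-project/object-detection-train:latest",
        },
    },
]

# This list would be passed as the worker_pool_specs argument when
# (re)defining the training step, e.g.:
#   aiplatform.CustomJob(display_name="train", worker_pool_specs=worker_pool_specs)
print(worker_pool_specs[0]["machine_spec"])
```

Note that adding GPUs only helps if the training code itself distributes work across them (for example via a multi-GPU strategy in the framework you use), which the question's "same model, same data" framing assumes.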
Author: LeetQuiz Editorial Team
You have built a custom Vertex AI pipeline that preprocesses images and trains an object detection model. The pipeline currently uses a single n1-standard-8 machine with one NVIDIA Tesla V100 GPU. Your goal is to reduce the model training time without compromising model accuracy. What should you do?
A. Reduce the number of layers in your object detection model.
B. Train the same model on a stratified subset of your dataset.
C. Update the WorkerPoolSpec to use a machine with 24 vCPUs and 1 NVIDIA Tesla V100 GPU.
D. Update the WorkerPoolSpec to use a machine with 24 vCPUs and 3 NVIDIA Tesla V100 GPUs.