
Answer-first summary for fast verification
Answer: B — Adjust the input function to enable parallel reads, parallel processing, and prefetch to optimize data pipeline efficiency.
This is the most cost-effective option because it targets the bottleneck that profiling actually identified — input data processing — without any hardware changes. Parallel reads keep the TPU fed from storage, parallel processing spreads decoding and augmentation across CPU threads, and prefetch overlaps input preparation with training steps, reducing training time while staying within budget.
Author: LeetQuiz Editorial Team
You are training an object detection model using a Cloud TPU v2 in a project with tight budget constraints and a deadline approaching. The training process is taking significantly longer than expected, and initial profiling indicates that the input data processing is a bottleneck. You need to implement a cost-effective solution that reduces training time without requiring expensive hardware upgrades. Which of the following measures would be the BEST to address this issue? (Choose one)
A
Switch from Cloud TPU v2 to Cloud TPU v3 and increase the batch size, despite the higher cost.
B
Adjust the input function to enable parallel reads, parallel processing, and prefetch to optimize data pipeline efficiency.
C
Replace the Cloud TPU v2 with 8 NVIDIA V100 GPUs and increase the batch size, considering the higher performance of GPUs for certain workloads.
D
Change the input function to resize and reshape the input images, potentially reducing the data size but possibly affecting model accuracy.
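The three adjustments named in option B map directly onto the `tf.data` API: `interleave` with `num_parallel_calls` for parallel reads, `map` with `num_parallel_calls` for parallel processing, and `prefetch` to overlap input preparation with accelerator compute. Below is a minimal, self-contained sketch; the synthetic in-memory "shards" and the `parse_example` function are placeholders standing in for real TFRecord files and image decoding/augmentation.

```python
import tensorflow as tf

def parse_example(x):
    # Placeholder for per-example work; a real input function would
    # parse a serialized TFRecord and decode/augment the image here.
    return tf.cast(x, tf.float32) * 2.0

# Synthetic stand-in for a list of input files: four "shards",
# each yielding its shard index five times.
shards = tf.data.Dataset.range(4)

dataset = (
    shards
    # Parallel reads: pull elements from several shards concurrently
    # (with real data, the lambda would open a TFRecordDataset per file).
    .interleave(
        lambda i: tf.data.Dataset.from_tensors(i).repeat(5),
        cycle_length=4,
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    # Parallel processing: run parse_example across multiple CPU threads.
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(5)
    # Prefetch: prepare the next batches while the accelerator
    # is busy with the current training step.
    .prefetch(tf.data.AUTOTUNE)
)

# Consume the pipeline; with 4 shards x 5 repeats, doubled, the sum is 60.0.
total = sum(batch.numpy().sum() for batch in dataset)
```

Letting `tf.data.AUTOTUNE` choose the parallelism level is generally preferable to hard-coding thread counts, since the runtime tunes it to the available CPU.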