
You are training a deep learning model with TensorFlow on a Google Cloud Platform (GCP) instance equipped with GPUs. The training data is distributed across multiple large files, and you have noticed that GPU utilization is lower than expected because the input pipeline cannot feed data to the GPU fast enough. Your primary goal is to optimize the input pipeline to fully leverage the GPU's capabilities without increasing the overall cost. Which of the following strategies would be the most effective way to achieve this goal? (Choose one)
A
Upgrade the network bandwidth to speed up data transfer rates.
B
Implement data caching within the pipeline to avoid reprocessing the same data.
C
Introduce parallel interleaving to the pipeline to read and process multiple files simultaneously.
D
Increase the CPU utilization to process more data in parallel.
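For context, the parallel interleaving described in option C maps directly onto `tf.data`'s `interleave` transformation, which reads several input files concurrently instead of one at a time. The sketch below uses small in-memory datasets as stand-ins for the hypothetical training-data shards (in practice each would be a `tf.data.TFRecordDataset` opened from a filename), so it runs without any real files:

```python
import tensorflow as tf

def make_shard(i):
    # Stand-in for opening one data file, e.g.
    # tf.data.TFRecordDataset(filenames[i]) in a real pipeline.
    return tf.data.Dataset.range(i * 10, i * 10 + 3)

# One element per "file"; interleave opens up to cycle_length of
# them at once and pulls records from each in turn.
shards = tf.data.Dataset.range(4)
ds = shards.interleave(
    make_shard,
    cycle_length=4,                        # files read concurrently
    num_parallel_calls=tf.data.AUTOTUNE,   # let tf.data pick the parallelism
)

# Prefetching overlaps input-pipeline work with GPU compute,
# which is the other half of keeping the GPU busy.
ds = ds.prefetch(tf.data.AUTOTUNE)

values = sorted(int(x) for x in ds.as_numpy_iterator())
```

Because the reads happen in parallel, a slow file no longer stalls the whole pipeline; combined with `prefetch`, this keeps batches ready while the GPU is still working on the previous step, at no extra hardware cost.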