
Answer-first summary for fast verification
Answer: After adding GPU kernel support for your custom operations, switch to Cloud GPUs.
The correct answer is **C**. TensorFlow can only place an operation on an accelerator if a kernel exists for that device, so the custom C++ operations must be given GPU kernel implementations before the matrix-multiplication workload can be offloaded to Cloud GPUs and training sped up. Option A is incorrect because Cloud TPUs cannot run custom C++ TensorFlow ops; switching without code changes would leave those ops on the CPU (or fail outright). Option B is incorrect because a GPU kernel is not a TPU kernel: GPU kernels target the CUDA programming model and will not run on TPUs. Option D is the least efficient choice; scaling out a CPU cluster may shorten wall-clock time somewhat, but it is both slower and less cost-effective than using accelerators.
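Concretely, "adding GPU kernel support" means registering a GPU kernel for each custom op via TensorFlow's `REGISTER_KERNEL_BUILDER` macro. The skeleton below is a hedged sketch, not a complete implementation: the op name `MyMatMul` and the class name are hypothetical, the `Compute` body is elided, and real code would additionally need a CUDA kernel launch and a build against the TensorFlow headers.

```cpp
// Sketch only: illustrates where GPU kernel registration happens for a
// custom TensorFlow op. Names ("MyMatMul", MyMatMulGpuOp) are hypothetical.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

REGISTER_OP("MyMatMul")
    .Input("a: float")
    .Input("b: float")
    .Output("product: float");

// GPU implementation of the op. Compute() would allocate the output via
// ctx->allocate_output(...) and launch a CUDA matmul kernel on the
// device's stream.
class MyMatMulGpuOp : public OpKernel {
 public:
  explicit MyMatMulGpuOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    // ... launch the CUDA kernel here ...
  }
};

// This registration is what lets TensorFlow place the op on /device:GPU:0.
// Without it, the op runs on the CPU even when a GPU is attached.
REGISTER_KERNEL_BUILDER(Name("MyMatMul").Device(DEVICE_GPU), MyMatMulGpuOp);
```

With only a CPU kernel registered, attaching a Cloud GPU would not help: the custom ops would still execute on the CPU, which is why option C pairs the kernel work with the switch to GPUs.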
Author: LeetQuiz Editorial Team
Your team is working on a specialized image recognition product that relies on custom C++ TensorFlow operations for complex matrix multiplications during the training loop. Currently, training takes several days. You're looking to significantly reduce this time and maintain cost efficiency by using an accelerator on Google Cloud. What's the best approach to achieve this?
A
Switch to Cloud TPUs without modifying your existing code.
B
After adding GPU kernel support for your custom operations, switch to Cloud TPUs.
C
After adding GPU kernel support for your custom operations, switch to Cloud GPUs.
D
Continue using CPUs but increase the size of your training cluster.