
You are working on a specialized product in the image recognition domain, and your team has developed a model that relies heavily on custom C++ TensorFlow operations. These ops are integral to your main training loop and primarily perform large matrix multiplications. Training currently takes several days. You want to significantly reduce training time while keeping costs low by using a Google Cloud accelerator. What should you do?
A
Use Cloud TPUs without any additional adjustment to your code.
B
Use Cloud TPUs after implementing GPU kernel support for your custom ops.
C
Use Cloud GPUs after implementing GPU kernel support for your custom ops.
D
Stay on CPUs and increase the size of the cluster you're training your model on.