
Answer-first summary for fast verification
Answer: Configure your model to use bfloat16 instead of float32.
The correct answer is D. Switching the model from float32 to bfloat16 exploits the TPU's native support for that format: TPU matrix units compute in bfloat16, so the change speeds up training and roughly halves the memory footprint of activations and parameters, typically with a one-line code change and negligible impact on final accuracy. Options A and C modify the model's architecture or its inputs, which would likely degrade accuracy; option B reduces memory pressure but lowers TPU utilization and changes convergence behavior, so it is a poorer fit for the stated constraints.
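The memory saving is easy to estimate, since bfloat16 stores each value in 2 bytes versus 4 bytes for float32. The sketch below uses an approximate ResNet-50 parameter count (~25.6M) purely as an illustrative assumption; the exact figure depends on the model variant.

```python
# Back-of-envelope comparison of parameter storage in float32 (4 bytes
# per value) vs bfloat16 (2 bytes per value).
PARAMS_RESNET50 = 25_600_000  # approximate ResNet-50 parameter count (assumption)

def param_bytes(n_params: int, bytes_per_value: int) -> int:
    """Raw storage needed for n_params values at the given width."""
    return n_params * bytes_per_value

fp32_mb = param_bytes(PARAMS_RESNET50, 4) / 1e6   # float32
bf16_mb = param_bytes(PARAMS_RESNET50, 2) / 1e6   # bfloat16

print(f"float32:  {fp32_mb:.1f} MB")   # ~102.4 MB
print(f"bfloat16: {bf16_mb:.1f} MB")   # ~51.2 MB

# In TensorFlow/Keras, the switch is typically a single line, e.g.:
#   tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
```

The same halving applies to activations held in memory during training, which is where much of the savings on a TPU actually comes from.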
Author: LeetQuiz Editorial Team
You are developing a machine learning model intended to classify whether X-ray images indicate bone fracture risk. The model employs a ResNet architecture and has been trained on Google's Vertex AI platform, utilizing a TPU as an accelerator. Despite this, you are unsatisfied with the training time and memory usage. You need to iterate on your training code quickly but prefer to make minimal changes to the existing codebase. Additionally, you aim to minimize any potential impact on the model’s final accuracy. Which of the following actions should you take?
A. Reduce the number of layers in the model architecture.
B. Reduce the global batch size from 1024 to 256.
C. Reduce the dimensions of the images used in the model.
D. Configure your model to use bfloat16 instead of float32.