
Answer-first summary for fast verification
Answer: Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
The correct answer is D. When the input data does not fit into memory, Google's recommended best practice is to convert the images into TFRecords, store them in Cloud Storage, and then use the tf.data API to read them during training. TFRecords provide an efficient serialized storage format, and tf.data streams records from storage rather than loading the whole dataset at once, enabling fast, parallel I/O. Option A only adds a prefetch transformation to an already-built dataset and does not address how the dataset is created. Options B and C (Dataset.from_tensor_slices() and tf.data.Dataset.from_tensors()) both require first materializing the images as in-memory tensors, which is not possible when the data is too large to fit into memory.
Author: LeetQuiz Editorial Team
You are tasked with developing an efficient input pipeline for an ML training model that needs to process images from various sources quickly. The images are too large to fit into memory at once. Considering Google's recommended best practices, how should you go about creating this dataset?
A. Create a tf.data.Dataset.prefetch transformation.
B. Convert the images to tf.Tensor objects, and then run Dataset.from_tensor_slices().
C. Convert the images to tf.Tensor objects, and then run tf.data.Dataset.from_tensors().
D. Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
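The pipeline described in answer D can be sketched as follows. This is a minimal illustration assuming TensorFlow 2.x; the file name, feature keys, and sample records are hypothetical, and in practice the TFRecord files would live at a gs:// Cloud Storage path rather than on local disk.

```python
import tensorflow as tf

def serialize_example(image_bytes, label):
    # Wrap the raw encoded image bytes and label in a tf.train.Example.
    feature = {
        "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(
        features=tf.train.Features(feature=feature)
    ).SerializeToString()

# Step 1: write images to a TFRecord file (placeholder bytes stand in
# for real encoded JPEG/PNG data).
samples = [(b"fake-image-bytes-0", 0), (b"fake-image-bytes-1", 1)]
with tf.io.TFRecordWriter("images.tfrecord") as writer:
    for image_bytes, label in samples:
        writer.write(serialize_example(image_bytes, label))

# Step 2: read back with the tf.data API, which streams records from
# storage instead of loading the whole dataset into memory.
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(record):
    return tf.io.parse_single_example(record, feature_spec)

dataset = (
    tf.data.TFRecordDataset(["images.tfrecord"])
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(buffer_size=1024)
    .batch(2)
    .prefetch(tf.data.AUTOTUNE)  # overlap input I/O with training steps
)
```

In a real pipeline the map step would also decode the image bytes (e.g. with tf.io.decode_jpeg) and apply preprocessing; prefetch then appears as the final transformation, which is the proper role of the prefetch mentioned in option A.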