
You are tasked with developing an efficient input pipeline for training an ML model that needs to process images from various sources quickly. The images are too large to fit into memory at once. Considering Google's recommended best practices, how should you create this dataset?
A. Create a tf.data.Dataset.prefetch transformation.
B. Convert the images to tf.Tensor objects, and then run Dataset.from_tensor_slices().
C. Convert the images to tf.Tensor objects, and then run tf.data.Dataset.from_tensors().
D. Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
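Option D matches Google's documented guidance for datasets that exceed memory: serialize the examples as TFRecords, shard them in Cloud Storage, and stream them through the tf.data API. The sketch below illustrates that pattern; the bucket path, file pattern, feature spec (a JPEG-encoded image plus an integer label), and image size are assumptions for illustration, not part of the question.

```python
import tensorflow as tf

# Hypothetical GCS location; replace with your own bucket and shard pattern.
TFRECORD_PATTERN = "gs://my-bucket/images/train-*.tfrecord"

def parse_example(serialized):
    # Assumed feature spec: a JPEG-encoded image and an int64 label.
    features = tf.io.parse_single_example(
        serialized,
        {
            "image": tf.io.FixedLenFeature([], tf.string),
            "label": tf.io.FixedLenFeature([], tf.int64),
        },
    )
    image = tf.io.decode_jpeg(features["image"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, features["label"]

def make_dataset(batch_size=32):
    # List the shards, then interleave reads so multiple files
    # are consumed in parallel instead of one at a time.
    files = tf.data.Dataset.list_files(TFRECORD_PATTERN)
    dataset = files.interleave(
        tf.data.TFRecordDataset,
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    dataset = dataset.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.shuffle(1000).batch(batch_size)
    # Prefetch overlaps preprocessing with model training steps.
    return dataset.prefetch(tf.data.AUTOTUNE)
```

Note that prefetch (option A) and the from_tensor_slices/from_tensors constructors (options B and C) are single pieces of a pipeline rather than a complete answer: the constructors in B and C require the tensors to already be in memory, which the question rules out, while prefetch only overlaps producer and consumer work and appears here as the final step of the larger TFRecord-based pipeline.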