
As an ML engineer at your company, you are tasked with developing a model to identify your company’s products in images. You have access to over one million high-resolution product images stored in a Cloud Storage bucket. Your goal is to experiment with multiple TensorFlow models using Vertex AI Training. To ensure efficient training and minimize data I/O bottlenecks, you need a strategy to read images at scale during the training process. What should you do?
A
Load the images directly into the Vertex AI compute nodes by using Cloud Storage FUSE. Read the images by using the tf.data.Dataset.from_tensor_slices function.
B
Create a Vertex AI managed dataset from your image data. Access the AIP_TRAINING_DATA_URI environment variable to read the images by using the tf.data.Dataset.list_files function.
C
Convert the images to TFRecords and store them in a Cloud Storage bucket. Read the TFRecords by using the tf.data.TFRecordDataset function.
D
Store the URLs of the images in a CSV file. Read the file by using the tf.data.experimental.CsvDataset function.
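The approach in option C — packing many small image files into a few large TFRecord files and streaming them with `tf.data.TFRecordDataset` — is the standard way to avoid per-file I/O overhead at this scale. A minimal sketch of the write-and-read round trip is below; it uses a local file path and placeholder bytes where a real pipeline would use `gs://` URIs and encoded JPEG data (both are assumptions for illustration).

```python
import tensorflow as tf

# Hypothetical local path; in practice this would be a gs:// URI in Cloud Storage.
record_path = "products.tfrecord"

# Write a few placeholder "images" (raw bytes standing in for encoded JPEGs).
with tf.io.TFRecordWriter(record_path) as writer:
    for i in range(3):
        image_bytes = bytes([i] * 8)  # placeholder for real image bytes
        example = tf.train.Example(features=tf.train.Features(feature={
            "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
            "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[i])),
        }))
        writer.write(example.SerializeToString())

# Read the records back, parsing examples in parallel and prefetching.
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(serialized):
    return tf.io.parse_single_example(serialized, feature_spec)

dataset = (tf.data.TFRecordDataset([record_path])
           .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
           .prefetch(tf.data.AUTOTUNE))

labels = [int(ex["label"]) for ex in dataset]
print(labels)  # [0, 1, 2]
```

Reading a handful of large sequential TFRecord files is far friendlier to Cloud Storage throughput than opening a million individual objects, which is why this pattern beats FUSE mounts or per-URL CSV reads for training-time input pipelines.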