
You are working on a large-scale machine learning project, using TensorFlow to train a model on a structured dataset of 100 billion records currently stored in multiple CSV files. To optimize input/output (I/O) performance and ensure efficient data processing during training, what should you do?
A. Load the data into BigQuery, and read the data from BigQuery.
B. Load the data into Cloud Bigtable, and read the data from Bigtable.
C. Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage.
D. Convert the CSV files into shards of TFRecords, and store the data in the Hadoop Distributed File System (HDFS).