Your company's Data Science team is developing a Dataflow job on Google Cloud to process large volumes of unstructured data in various file formats as part of an ETL pipeline. What is the best approach to make this data accessible to the Dataflow job?
A. Load the data into Cloud SQL using the import feature in the Google Cloud console.
B. Transfer the data to BigQuery using the bq command line utility.
C. Store the data in Cloud Storage by employing the gcloud storage command.
D. Ingest the data into Cloud Spanner via the import capability in the Google Cloud console.
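For context on option C, the `gcloud storage` CLI can upload files of any format to a Cloud Storage bucket, which Dataflow pipelines can then read directly. A minimal sketch (the bucket name `my-etl-bucket` and local path `./raw-data/` are placeholders):

```shell
# Create a bucket to hold the unstructured input files (name is a placeholder).
gcloud storage buckets create gs://my-etl-bucket --location=us-central1

# Recursively upload local files of any format to the bucket.
gcloud storage cp --recursive ./raw-data/ gs://my-etl-bucket/input/

# A Dataflow job can then reference the files via a gs:// path,
# e.g. as the value of an input parameter such as gs://my-etl-bucket/input/*.
```

Cloud Storage is the usual staging area here because, unlike Cloud SQL, BigQuery, or Cloud Spanner, it imposes no schema and accepts arbitrary file formats.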