
Flowlogistic, a leading logistics and supply chain provider, is facing challenges because its current infrastructure cannot support its proprietary real-time inventory-tracking system or the analysis of its orders and shipment logs. The company wants to use Google BigQuery for analysis but still has Apache Hadoop and Spark workloads that cannot be moved to BigQuery. How should Flowlogistic store the data that is common to both workloads?
A. Store the common data in BigQuery as partitioned tables.
B. Store the common data in BigQuery and expose authorized views.
C. Store the common data encoded as Avro in Google Cloud Storage.
D. Store the common data in HDFS storage on a Google Cloud Dataproc cluster.
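For context on option C: Avro files in Cloud Storage can be read natively by both Spark on Dataproc and BigQuery, so a single copy of the data serves both workloads. A minimal sketch of the BigQuery side, assuming a hypothetical bucket `gs://flowlogistic-shared` and dataset `logistics` (names invented for illustration):

```shell
# Spark on Dataproc can read the shared files directly, e.g.:
#   spark.read.format("avro").load("gs://flowlogistic-shared/orders/*.avro")
#
# BigQuery can query the same files through an external table:

# Generate an external table definition over the Avro files.
bq mkdef --source_format=AVRO \
  "gs://flowlogistic-shared/orders/*.avro" > orders_def.json

# Create the external table in the (hypothetical) logistics dataset.
bq mk --external_table_definition=orders_def.json logistics.orders_ext

# Query it like any other BigQuery table.
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) FROM logistics.orders_ext'
```

Because the Avro files stay in Cloud Storage, neither workload blocks the other and the Dataproc cluster can be deleted without losing data.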