Google Professional Data Engineer

Flowlogistic, a logistics company, aims to use Google BigQuery as their primary analysis platform, but they are unable to migrate some of their existing Apache Hadoop and Spark workloads to BigQuery. They need a way to store data that must be accessible to both BigQuery and their Hadoop/Spark workloads. What should they do?

A. Store the common data in BigQuery as partitioned tables.
B. Store the common data in BigQuery and expose authorized views.
C. Store the common data encoded as Avro in Google Cloud Storage.
D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.
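For context on how one of these options could work in practice: Avro files in Cloud Storage can be read by both engines, since BigQuery supports Avro natively as an external data source and Spark has an Avro connector. A minimal sketch, assuming hypothetical bucket, dataset, and table names (`flowlogistic-shared`, `analytics.common_data`); this is illustrative only, not an answer key:

```shell
# Expose Avro files in Cloud Storage to BigQuery as an external table.
# Avro is self-describing, so no explicit schema file is needed.
bq mk \
  --external_table_definition=AVRO=gs://flowlogistic-shared/common/*.avro \
  analytics.common_data

# The same files remain readable from Spark (e.g. on Dataproc) via the
# spark-avro connector:
#   spark.read.format("avro").load("gs://flowlogistic-shared/common/")
```

Because both systems read the files in place, neither workload needs a copy or an export step.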