When designing storage for very large text files in a Google Cloud data pipeline that requires ANSI SQL query support, compression, and parallel loading, which approach aligns with Google's best practices?
A. Utilize Cloud Dataflow to transform text files into compressed Avro format and store them in BigQuery for querying.
B. Employ Cloud Dataflow to convert text files into compressed Avro format, store them in Cloud Storage, and use BigQuery permanent linked (external) tables for querying.
C. Compress text files to gzip format using Grid Computing Tools and store them in BigQuery for querying.
D. Compress text files to gzip format using Grid Computing Tools, store them in Cloud Storage, and then import into Cloud Bigtable for querying.
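For context on the "permanent linked tables" wording in option B: once the Avro files are in Cloud Storage, BigQuery can query them in place through a permanent external table. A minimal sketch of that DDL is below; the dataset, table, and bucket names are hypothetical.

```sql
-- Permanent external table over compressed Avro files in Cloud Storage
-- (mydataset, events_avro, and the gs:// path are placeholder names)
CREATE EXTERNAL TABLE mydataset.events_avro
OPTIONS (
  format = 'AVRO',
  uris = ['gs://my-bucket/events/*.avro']
);
```

After the table is created, it can be queried with standard ANSI SQL (e.g. `SELECT COUNT(*) FROM mydataset.events_avro`) while the data itself stays in Cloud Storage.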