
Scenario: You rely on BigQuery as your primary analytics platform. New data is added daily and processed by an ETL pipeline for end users. The ETL pipeline is updated frequently, and errors may go undetected for up to two weeks. How should you organize your data in BigQuery and optimize your backups so that you can recover efficiently from potential errors while minimizing storage costs?
A
Organize your data in separate tables for each month, and use snapshot decorators to restore the table to a point in time before the corruption.
B
Organize your data in separate tables for each month, and export, compress, and store the data in Cloud Storage.
C
Organize your data in a single table, then export, compress, and store the BigQuery data in Cloud Storage.
D
Organize your data in separate tables for each month, and duplicate your data on a separate dataset in BigQuery.
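For context, the export-and-compress step described in option B could be scripted with the BigQuery Python client. This is a minimal sketch, not an official solution from the question; the project, dataset, table, and bucket names (my-project, analytics, events_202401, gs://my-backup-bucket) are all hypothetical placeholders.

from google.cloud import bigquery

client = bigquery.Client()

# Export one monthly table as GZIP-compressed CSV shards to Cloud Storage.
job_config = bigquery.ExtractJobConfig(
    compression=bigquery.Compression.GZIP,
    destination_format=bigquery.DestinationFormat.CSV,
)
extract_job = client.extract_table(
    "my-project.analytics.events_202401",            # hypothetical monthly table
    "gs://my-backup-bucket/events_202401-*.csv.gz",  # wildcard yields sharded files
    job_config=job_config,
)
extract_job.result()  # block until the export job completes

The wildcard in the destination URI is used because BigQuery requires tables larger than 1 GB to be exported across multiple files.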