
You manage a centralized analytics platform on BigQuery. New data is loaded every day, and an ETL pipeline processes it for end users. Because the ETL pipeline is updated frequently, errors it introduces may not be identified for up to two weeks, so you need a mechanism for error recovery. You also want to optimize your backups for storage costs. How should you structure your data in BigQuery and manage your backup storage?
A. Organize your data in a single table, and export, compress, and store the BigQuery data in Cloud Storage.
B. Organize your data in separate tables for each month, and export, compress, and store the data in Cloud Storage.
C. Organize your data in separate tables for each month, and duplicate your data into a separate dataset in BigQuery.
D. Organize your data in separate tables for each month, and use snapshot decorators to restore the table to a time prior to the corruption.
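The monthly-export approach described in option B could be sketched as a small helper that derives a per-month table name and a compressed Cloud Storage destination URI. The table base name `events` and bucket `my-backup-bucket` are hypothetical placeholders; the actual export itself would be run with the `bq extract` CLI (shown as a comment), since it requires GCP credentials.

```python
from datetime import date

def monthly_table_name(d: date, base: str = "events") -> str:
    # One table per month, e.g. events_202401 (hypothetical naming scheme)
    return f"{base}_{d.strftime('%Y%m')}"

def export_uri(d: date, bucket: str = "my-backup-bucket") -> str:
    # Wildcard URI lets BigQuery split a large export into multiple
    # compressed files under a year/month prefix in Cloud Storage
    return f"gs://{bucket}/{d.strftime('%Y/%m')}/{monthly_table_name(d)}-*.csv.gz"

# The export for a given month would then be performed with, for example:
#   bq extract --compression=GZIP mydataset.events_202401 \
#       "gs://my-backup-bucket/2024/01/events_202401-*.csv.gz"
```

Exporting only the affected month keeps backups small and cheap, and a corrupted month can be restored by reloading just that month's compressed files.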