
You oversee a BigQuery dataset for your analytics platform; data is loaded and transformed daily by an ETL pipeline. The pipeline is updated frequently, and an update can introduce errors that go unnoticed for up to two weeks. You need to be able to recover from such errors while keeping backup storage costs low. What strategy should you adopt?
A
Generate monthly tables and duplicate the data into another BigQuery dataset
B
Generate monthly tables, then export, compress, and archive the data in Cloud Storage
C
Generate monthly tables and use snapshot decorators to restore a table to its state before the error was introduced
D
Consolidate all data into a single table, then export, compress, and archive it in Cloud Storage
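
For context on the export-and-compress approach named in options B and D: below is a minimal sketch of exporting a BigQuery table as compressed files to Cloud Storage using the google-cloud-bigquery client library. The project, dataset, table, and bucket names are hypothetical placeholders, not part of the question.

```python
from google.cloud import bigquery

# Hypothetical identifiers, used only for illustration.
client = bigquery.Client(project="my-analytics-project")
table_ref = "my-analytics-project.analytics.events_202401"
destination_uri = "gs://my-backup-bucket/events_202401/part-*.csv.gz"

# Export as gzip-compressed CSV to reduce Cloud Storage costs.
job_config = bigquery.ExtractJobConfig(
    destination_format=bigquery.DestinationFormat.CSV,
    compression=bigquery.Compression.GZIP,
)

# Start the extract job and block until the export finishes.
extract_job = client.extract_table(
    table_ref, destination_uri, job_config=job_config
)
extract_job.result()
```

The wildcard (`part-*`) in the destination URI lets BigQuery shard large tables across multiple output files during the export.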