
You oversee the data pipeline for an analytics platform built on BigQuery; an ETL process loads and transforms data daily. The pipeline is updated frequently, and these updates sometimes introduce errors that can go unnoticed for up to two weeks. You need a strategy that allows you to recover from such errors while keeping backup storage costs low. What is the best approach?
A. Consolidate all data into a single table, then export and compress it for storage in Cloud Storage
B. Generate monthly tables, export and compress them, then store them in Cloud Storage
C. Generate monthly tables and replicate the data in a separate BigQuery dataset
D. Generate monthly tables and use snapshot decorators to revert a table to a previous version before the error
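
For a sense of what the export-and-compress step in option B involves, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and bucket names are hypothetical placeholders, not values from the question.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical names: substitute your own project, dataset, monthly table, and bucket.
table_ref = "my-project.analytics.events_202401"
destination_uri = "gs://my-backup-bucket/events_202401/*.csv.gz"

# GZIP-compressed CSV keeps the Cloud Storage backup small, which is what
# makes this option cheap relative to keeping a live BigQuery copy.
job_config = bigquery.ExtractJobConfig(
    compression=bigquery.Compression.GZIP,
    destination_format=bigquery.DestinationFormat.CSV,
)

extract_job = client.extract_table(table_ref, destination_uri, job_config=job_config)
extract_job.result()  # Block until the export job completes.
```

When weighing option D, note that BigQuery's historical-data access (snapshot decorators, and the time travel window that superseded them) extends at most seven days into the past, which matters when errors can go undetected for up to two weeks.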