Scenario: BigQuery is your primary analytics platform. New data arrives daily and is processed by an ETL pipeline before being served to end users. The pipeline changes frequently, and an error it introduces might go undetected for up to two weeks. How should you structure your data in BigQuery and manage backups so you can recover from such errors while minimizing storage costs?
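One common pattern this scenario points toward (a sketch, not an official answer key) is to keep the data in a date-partitioned table and take a table snapshot after each ETL run, with the snapshot expiring after 14 days to match the error-detection window. Snapshots bill only for data that diverges from the base table, which keeps storage costs low. In the sketch below, which uses the google-cloud-bigquery Python client, the project, dataset, and table names (`my-project.analytics.events`) and the `event_date` column are placeholders for illustration.

```python
from datetime import date

from google.cloud import bigquery

client = bigquery.Client()

# Partition the base table by date so a bad ETL run only
# affects (and only needs restoring for) specific partitions.
# `my-project.analytics.events` and `event_date` are placeholders.
table = bigquery.Table("my-project.analytics.events")
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",
)
client.create_table(table, exists_ok=True)

# After each daily ETL run, snapshot the table. Snapshots are
# cheap (only divergent data is billed) and expire after 14 days,
# the longest an error can go unnoticed.
snapshot_ddl = """
CREATE SNAPSHOT TABLE
  `my-project.analytics.events_snap_{run_date}`
CLONE `my-project.analytics.events`
OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
)
"""
client.query(
    snapshot_ddl.format(run_date=date.today().strftime("%Y%m%d"))
).result()
```

To recover from a bad pipeline run, you would restore the affected table from the last known-good snapshot, for example with `CREATE OR REPLACE TABLE ... CLONE` against the snapshot, rather than reprocessing everything from scratch.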