
You frequently query a massive BigQuery table that spans several petabytes. You need to filter this data and generate simple aggregate reports that provide timely insights to downstream users. Given the table's size, you need a solution that runs these queries more efficiently while still returning the most current data quickly. What should you do?
A. Run a scheduled query to pull the necessary data at set intervals each day.
B. Use a cached query to shorten the time to results.
C. Limit the columns returned in the final result.
D. Create a materialized view based on the query being run.
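For context on option D: BigQuery materialized views precompute aggregate results and are refreshed automatically, so reads against them are fast and stay current without rescanning the petabyte-scale base table. Below is a minimal sketch of that approach using the Python client library; the project, dataset, table, and column names (`my-project.sales.orders`, `order_total`, and so on) are hypothetical placeholders, not part of the question.

```python
# Minimal sketch of the materialized-view approach (option D), assuming a
# hypothetical `my-project.sales.orders` base table. Requires
# `pip install google-cloud-bigquery` and application default credentials.
from google.cloud import bigquery

client = bigquery.Client()

# DDL that precomputes the simple aggregate report. BigQuery keeps the
# materialized view up to date automatically as the base table changes.
ddl = """
CREATE MATERIALIZED VIEW `my-project.sales.orders_by_region_mv` AS
SELECT
  region,
  COUNT(*) AS order_count,
  SUM(order_total) AS revenue
FROM `my-project.sales.orders`
GROUP BY region
"""
client.query(ddl).result()  # blocks until the DDL statement completes

# Downstream reports can now filter the small, pre-aggregated view
# instead of scanning the full petabyte-scale table on every query.
rows = client.query(
    "SELECT * FROM `my-project.sales.orders_by_region_mv` "
    "WHERE region = 'EMEA'"
).result()
for row in rows:
    print(row.region, row.order_count, row.revenue)
```

By contrast, a scheduled query (option A) serves stale data between runs, and cached results (option B) are invalidated whenever the underlying table changes, which makes both a poor fit for the "most current information" requirement.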