
You're working with a BigQuery table and run a query whose WHERE clause filters on a timestamp column and an ID column. Although the filter targets a small subset of the data, the bq query --dry_run command shows that the query performs a full table scan. You want to minimize the data BigQuery scans without modifying your SQL queries. What is the best approach?
A
Use the LIMIT keyword to decrease the number of rows returned
B
Create individual tables for each ID
C
Recreate the table with partitioning and clustering columns
D
Apply the bq query --maximum_bytes_billed flag to limit the bytes billed
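
Option C refers to recreating the table so that the timestamp filter can prune partitions and the ID filter can benefit from clustering. A minimal BigQuery DDL sketch, assuming a hypothetical table mydataset.events with columns event_time (TIMESTAMP) and user_id (STRING):

```sql
-- Recreate the table partitioned by day on the timestamp column
-- and clustered by the ID column (table and column names are hypothetical).
CREATE TABLE mydataset.events_optimized
PARTITION BY DATE(event_time)
CLUSTER BY user_id
AS
SELECT * FROM mydataset.events;
```

With this layout, the same unmodified WHERE clause on event_time and user_id lets BigQuery prune irrelevant partitions and skip non-matching clustered blocks, so bq query --dry_run should report far fewer bytes processed.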