
You have a BigQuery table, and you run a query with a WHERE clause that filters on a timestamp column and an ID column. However, after running the query with the bq query --dry_run flag, you discover that it triggers a full scan of the table, even though the filter selects only a small fraction of the data. You want to minimize the amount of data scanned by BigQuery while keeping your SQL queries intact. Which approach should you take?
A
Use the LIMIT keyword to reduce the number of rows returned
B
Create a separate table for each ID
C
Recreate the table with a partitioning column and clustering column
D
Use the bq query --maximum_bytes_billed flag to restrict the number of bytes billed
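The correct choice is C. BigQuery prunes scans only when the table is physically organized around the filter columns: partitioning on the timestamp lets the engine skip whole partitions, and clustering on the ID lets it skip blocks within each partition, all without changing the SQL. A minimal sketch of the recreated table (dataset, table, and column names are hypothetical):

```sql
-- Recreate the table partitioned by day on the timestamp column
-- and clustered on the ID column, copying the existing rows.
CREATE TABLE mydataset.events_partitioned
PARTITION BY DATE(event_ts)
CLUSTER BY event_id
AS
SELECT * FROM mydataset.events;
```

After the rebuild, rerunning the same query with bq query --dry_run should report far fewer bytes processed, because only the partitions and clustered blocks matching the WHERE clause are scanned. By contrast, LIMIT (A) does not reduce bytes scanned, one table per ID (B) forces query rewrites, and --maximum_bytes_billed (D) merely caps cost by failing queries that exceed the limit.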