
You need to store time series data for CPU and memory usage from millions of computers, with data recorded every second. Analysts will perform real-time, ad hoc analytics on this data. The solution must avoid per-query charges and scale with the dataset. Which database and data model is best suited for this scenario?
A
Design a wide table in Bigtable, using a row key that combines the computer identifier with the sample time truncated to the minute, and store each second's values as separate columns.
B
Set up a table in BigQuery, continuously adding new CPU and memory usage samples to it.
C
Implement a narrow table in Bigtable, with a row key that combines the Compute Engine computer identifier and the sample time each second (see the sketch below).
D
Construct a wide table in BigQuery, with a separate column for each second's sample value, updating the row at every one-second interval.
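
To make the Bigtable row-key design concrete, here is a minimal sketch of how a per-second, narrow-table write (as described in option C) might look with the google-cloud-bigtable Python client. The project, instance, table, and column-family names ("my-project", "metrics-instance", "machine_metrics", "stats") are hypothetical placeholders, and the table and column family are assumed to already exist.

```python
import time

from google.cloud import bigtable

# Hypothetical identifiers; substitute your own project, instance, and table.
client = bigtable.Client(project="my-project")
instance = client.instance("metrics-instance")
table = instance.table("machine_metrics")


def write_sample(machine_id: str, cpu: float, memory: float) -> None:
    """Write one per-second sample as its own narrow row."""
    epoch_seconds = int(time.time())
    # The row key combines the machine identifier with the per-second timestamp,
    # so all rows for one machine sort together and time-range scans stay cheap.
    row_key = f"{machine_id}#{epoch_seconds:010d}".encode("utf-8")

    row = table.direct_row(row_key)
    row.set_cell("stats", b"cpu", str(cpu).encode("utf-8"))
    row.set_cell("stats", b"memory", str(memory).encode("utf-8"))
    row.commit()


write_sample("machine-0001", cpu=0.42, memory=0.73)
```

In this layout each second's sample is its own short row, so writes stay small and a prefix scan on the machine identifier returns a contiguous time range without per-query charges.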