
Answer-first summary for fast verification
Answer: Store your data in Bigtable. Concatenate the sensor ID and timestamp and use it as the row key. Perform an export to BigQuery every day.
Bigtable excels at fast lookups by row key, often with single-digit-millisecond latency. Constructing the row key from the sensor ID and timestamp enables efficient retrieval of a specific sensor's reading at an exact timestamp, and also lets you scan a single sensor's readings over a time range by key prefix. Bigtable's wide-column design suits time series data, allowing new metrics to be added without schema changes, and it scales horizontally, so the existing 1 TB and the projected 1 GB of daily growth are easily handled. The daily export to BigQuery supports complex analytic queries over the data, including joins, which Bigtable itself does not provide. Therefore, the correct answer is B.
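To make the row-key design concrete, here is a minimal, self-contained sketch of the `sensorID#timestamp` key pattern described above. It uses no Bigtable client (the function name and the `#` separator are illustrative choices, not a required API); the point is that zero-padding the timestamp makes keys sort lexicographically in time order, which is what enables point lookups and per-sensor prefix scans in Bigtable.

```python
def row_key(sensor_id: str, epoch_seconds: int) -> bytes:
    """Build a Bigtable-style row key: sensor ID, then a
    zero-padded timestamp so lexicographic order == time order."""
    return f"{sensor_id}#{epoch_seconds:010d}".encode()

# Zero-padding matters: without it, b"s1#12" would sort before b"s1#5".
assert row_key("sensor-001", 5) < row_key("sensor-001", 12)

# All keys for one sensor share a prefix, so a range/prefix scan
# retrieves that sensor's readings in timestamp order.
assert row_key("sensor-001", 5).startswith(b"sensor-001#")
```

A real table would write each metric value into a column family keyed this way; the daily BigQuery export then handles the join-heavy analytics that row-key access cannot.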
Author: LeetQuiz Editorial Team
Imagine you are managing a network of 1,000 sensors, each generating time series data at a rate of one metric per sensor per second, accompanied by a timestamp. You have currently accumulated 1 TB of data and anticipate growth of 1 GB per day. You require two distinct access patterns for this data:
1. Retrieve an individual sensor's reading at a specific timestamp with low latency.
2. Run complex analytic queries, including joins, across the full dataset.
What would be the optimal way to store this data to meet these requirements?
A
Store your data in BigQuery. Concatenate the sensor ID and timestamp, and use it as the primary key.
B
Store your data in Bigtable. Concatenate the sensor ID and timestamp and use it as the row key. Perform an export to BigQuery every day.
C
Store your data in Bigtable. Concatenate the sensor ID and metric, and use it as the row key. Perform an export to BigQuery every day.
D
Store your data in BigQuery. Use the metric as a primary key.