
Your company is running its first dynamic marketing campaign, aiming to serve varied offers by analyzing real-time data throughout the holiday season. During this 30-day campaign, data scientists are accumulating terabytes of data, with volume growing rapidly every hour. They use Google Cloud Dataflow to preprocess the incoming data and store the feature (signal) data needed by their machine learning model in Google Cloud Bigtable. The team has observed suboptimal read and write performance with their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do?
A
Redefine the schema by evenly distributing reads and writes across the row space of the table.
B
The performance issue should resolve itself over time as the size of the Bigtable cluster is increased.
C
Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.
D
Redesign the schema to use row keys based on numeric IDs that increase sequentially for each user viewing the offers.
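The distinction behind option A can be illustrated with a row-key design sketch. A common way to spread reads and writes across Bigtable's row space is to prefix the key with a short hash of a high-cardinality field (sometimes called "salting" or key hashing), instead of using a monotonically increasing ID or timestamp that funnels all new writes to one node. The function below is a hypothetical illustration of that pattern, not code from the question:

```python
import hashlib

def make_row_key(user_id: str, timestamp: str) -> str:
    """Build a Bigtable row key that distributes load across the row space.

    A short hash of the user ID is placed first, so keys for sequential
    users or timestamps land in different parts of the keyspace rather
    than hotspotting a single tablet. The readable fields follow the
    prefix so rows for one user still sort together.
    """
    prefix = hashlib.md5(user_id.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}#{user_id}#{timestamp}"

# Sequential user IDs produce well-scattered key prefixes:
keys = [make_row_key(f"user{i}", "2024-12-01T00:00:00") for i in range(100)]
```

By contrast, option D's sequentially increasing numeric IDs would sort all recent writes into one contiguous key range, concentrating traffic on a single node, which is exactly the hotspot pattern Bigtable schema guidance warns against.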