
You are using Bigtable to support a real-time application with a substantial mix of read and write operations. A new requirement has emerged: an analytical job must run every hour to compute statistics over the entire database. You must preserve the reliability and performance of both the production application and the new analytical workload. What is the optimal approach?
A. Export a Bigtable dump to GCS and run the analytical job on the exported files.
B. Add a second cluster to the existing instance with multi-cluster routing; use a live-traffic app profile for the regular workload and a batch-analytics profile for the analytics workload.
C. Add a second cluster to the existing instance with single-cluster routing; use a live-traffic app profile for the regular workload and a batch-analytics profile for the analytics workload.
D. Double the size of the existing cluster and run the analytics workload on the resized cluster.
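The options above hinge on Bigtable replication and app profiles. As a rough illustration of the second-cluster pattern that options B and C describe, here is a minimal sketch using the google-cloud-bigtable Python client; all project, instance, cluster, and profile names are hypothetical placeholders, and the existing serving cluster is assumed to be named `serving-cluster`.

```python
# Sketch: add a second cluster and define per-workload app profiles.
# Names ("my-project", "prod-instance", "serving-cluster", etc.) are
# hypothetical; adapt them to your environment.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("prod-instance")

# Add a second cluster to the existing instance; Bigtable then
# replicates data between the clusters automatically.
analytics_cluster = instance.cluster(
    "analytics-cluster",
    location_id="us-central1-b",
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)
analytics_cluster.create().result()  # wait for the long-running operation

# App profile pinned to the serving cluster for the live application.
live_profile = instance.app_profile(
    app_profile_id="live-traffic",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="serving-cluster",
    allow_transactional_writes=True,
)
live_profile.create(ignore_warnings=True)

# App profile pinned to the new cluster for the hourly analytics job,
# so heavy scans never compete with serving traffic.
batch_profile = instance.app_profile(
    app_profile_id="batch-analytics",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="analytics-cluster",
)
batch_profile.create(ignore_warnings=True)

# Each workload then opens the table through its own profile, e.g.:
live_table = instance.table("events", app_profile_id="live-traffic")
batch_table = instance.table("events", app_profile_id="batch-analytics")
```

Note that only single-cluster routing pins each workload to a specific cluster; with multi-cluster routing, Bigtable may route requests from either profile to the nearest available cluster, so the two workloads would not be isolated from each other.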