You are using Bigtable to support a real-time application with a substantial mix of read and write operations. A new requirement calls for running an hourly analytical job that computes statistics across the entire database. You must maintain the reliability and performance of both the production application and the new analytical workload. What is the optimal approach?