
Answer-first summary for fast verification
Answer: Cloud Storage using a scheduled task and gsutil
The question emphasizes migrating data backup and disaster recovery (DR) solutions to GCP, with the data retained for later analysis and explicit requirements for scalability and cost-efficiency. Cloud Storage (option B) is optimal: it provides durable, low-cost object storage well suited to backups and DR, supports automated transfers via a scheduled task running gsutil, and offers storage classes such as Nearline, Coldline, and Archive to minimize cost when data is not immediately needed. BigQuery (option A) is less suitable because it is a data warehouse optimized for analytics queries, not backup/DR, and a continuously updating pipeline adds storage and processing cost before any analysis is actually needed. Compute Engine with Persistent Disk (option C) is inefficient for backup/DR because of VM management overhead and because block storage does not scale as cheaply as object storage. Cloud Datastore (option D) is a NoSQL database for transactional workloads and is not cost-effective for bulk backups. Community discussion strongly favors B (roughly 80% consensus), citing cost efficiency and alignment with Google's DR guidance, and notes that the data can be loaded into BigQuery later if analysis is required.
Author: LeetQuiz Editorial Team
An organization is migrating its infrastructure from on-premises to Google Cloud Platform (GCP), starting with its data backup and disaster recovery solutions. The migrated data will be used for later analysis. The production environment will stay on-premises indefinitely. The solution must be scalable and cost-efficient.
Which GCP solution should the organization use?
A. BigQuery using a data pipeline job with continuous updates
B. Cloud Storage using a scheduled task and gsutil
C. Compute Engine Virtual Machines using Persistent Disk
D. Cloud Datastore using regularly scheduled batch upload jobs