
Your company's on-premises Apache Hadoop servers are nearing end-of-life, and the IT strategy calls for migrating them to Google Cloud Dataproc. A like-for-like migration of the existing cluster would require 50 TB of Google Persistent Disk storage per node. The Chief Information Officer (CIO) is concerned about the cost of provisioning that much block storage. Your objective is to minimize the storage costs of this migration. What should you do?
A
Put the data into Google Cloud Storage.
B
Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C
Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D
Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
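The recommended pattern behind option A is to store the data in Google Cloud Storage rather than on cluster-attached Persistent Disks; Dataproc clusters read `gs://` paths directly through the preinstalled Cloud Storage connector, so the cluster's disks only need to hold the OS, software, and shuffle data. A minimal sketch of that migration, assuming a hypothetical bucket name (`my-hadoop-data`), region, and HDFS path:

```shell
# Create a bucket to hold the cluster's data (bucket name is illustrative).
gsutil mb -l us-central1 gs://my-hadoop-data

# From the on-premises cluster, copy HDFS data to Cloud Storage with DistCp.
# Requires the Cloud Storage connector to be installed on-premises.
hadoop distcp hdfs:///user/data gs://my-hadoop-data/user/data

# Create a right-sized Dataproc cluster; no large Persistent Disks needed
# because jobs read input directly from gs:// paths.
gcloud dataproc clusters create analytics-cluster \
    --region=us-central1 \
    --num-workers=2

# Jobs then reference Cloud Storage instead of HDFS, e.g.:
#   spark.read.text("gs://my-hadoop-data/user/data")
```

This decouples storage from compute: Cloud Storage is billed at object-storage rates (with Nearline/Coldline classes for colder data), and the cluster can even be deleted between jobs without losing data.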