
You currently manage thousands of Apache Spark jobs on your on-premises Apache Hadoop cluster. To streamline operations and reduce maintenance overhead, you want to migrate these jobs to Google Cloud and run them on managed services rather than maintaining a long-lived Hadoop cluster on-premises. Given your tight schedule, it is crucial to minimize the code changes required during the migration. What steps should you take to achieve this?
A. Move your data to BigQuery. Convert your Spark scripts to a SQL-based processing approach.
B. Rewrite your jobs in Apache Beam. Run your jobs in Dataflow.
C. Copy your data to Compute Engine disks. Manage and run your jobs directly on those instances.
D. Move your data to Cloud Storage. Run your jobs on Dataproc.
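
As background for weighing the options, here is a minimal sketch of the kind of change an existing Spark job typically needs when its data moves from HDFS to Cloud Storage and it runs on Dataproc. The job name, bucket, and paths (`gs://example-bucket/...`) are hypothetical; the point is that the Spark logic itself can stay as written, since Dataproc clusters include the Cloud Storage connector and resolve `gs://` URIs natively.

```python
# Hypothetical PySpark job illustrating a lift-and-shift to Dataproc:
# only the storage URIs change from hdfs:// to gs://; the transformation
# logic is untouched.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-clickstream-rollup").getOrCreate()

# Before migration this might have read "hdfs:///data/clickstream/2024-01-01/".
events = spark.read.parquet("gs://example-bucket/clickstream/2024-01-01/")

# Same aggregation logic as on the on-premises cluster.
daily_counts = (
    events.groupBy("page_id")
          .count()
          .orderBy("count", ascending=False)
)

# Before migration this might have written to an HDFS path as well.
daily_counts.write.mode("overwrite").parquet("gs://example-bucket/rollups/2024-01-01/")

spark.stop()
```

A job like this could then be submitted without further changes using, for example, `gcloud dataproc jobs submit pyspark daily_rollup.py --cluster=<cluster-name> --region=<region>` (script and cluster names here are placeholders).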