
Answer-first summary for fast verification
Answer: Configure the job to run on Dataproc Serverless.
Dataproc Serverless automatically provisions and manages the compute resources needed to run your Spark workloads, so you do not have to create, size, or maintain a cluster yourself.
Author: LeetQuiz Editorial Team
You manage a PySpark batch data pipeline by using Dataproc. You want to take a hands-off approach to running the workload, and you do not want to provision and manage your own cluster. What should you do?
A
Rewrite the job in Dataflow with SQL.
B
Configure the job to run with Spot VMs.
C
Configure the job to run on Dataproc Serverless.
D
Rewrite the job in Spark SQL.
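Why C is correct: Dataproc Serverless lets you submit a PySpark batch workload directly, with no cluster to create or tune. A minimal submission looks like the sketch below; the file name, region, and bucket are placeholder values you would replace with your own.

```shell
# Submit an existing PySpark script as a Dataproc Serverless batch workload.
# "my_pipeline.py", "us-central1", and the bucket name are illustrative placeholders.
gcloud dataproc batches submit pyspark my_pipeline.py \
    --region=us-central1 \
    --deps-bucket=gs://my-staging-bucket
```

Dataproc Serverless provisions the Spark executors on demand and tears them down when the batch finishes, so there is nothing to manage between runs. Options A and D require rewriting the pipeline, and option B (Spot VMs) still requires you to provision and manage a cluster.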