You manage a PySpark batch data pipeline by using Dataproc. You want to take a hands-off approach to running the workload, and you do not want to provision and manage your own cluster. What should you do?
Exam-Like
Explanation:
Submit the workload as a Dataproc Serverless batch. Dataproc Serverless automatically provisions, scales, and tears down the infrastructure needed to run the PySpark job, so you do not create or manage a cluster yourself.
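A minimal sketch of submitting such a batch with the `google-cloud-dataproc` Python client is shown below; the project ID, region, and Cloud Storage path to the PySpark script are placeholder assumptions.

```python
from google.cloud import dataproc_v1

PROJECT_ID = "my-project"          # assumed project ID
REGION = "us-central1"             # assumed region
MAIN_PY_URI = "gs://my-bucket/my_pipeline.py"  # assumed path to the PySpark script

# The Batch controller uses a regional endpoint.
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
)

# Describe the serverless PySpark batch workload; no cluster is created or managed.
batch = dataproc_v1.Batch()
batch.pyspark_batch.main_python_file_uri = MAIN_PY_URI

# create_batch returns a long-running operation; result() blocks until the batch completes.
operation = client.create_batch(
    parent=f"projects/{PROJECT_ID}/locations/{REGION}",
    batch=batch,
)
response = operation.result()
print(f"Batch finished in state: {response.state.name}")
```

The same workload could also be submitted from the CLI with `gcloud dataproc batches submit pyspark`; either way, resources are provisioned on demand rather than on a user-managed cluster.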