When managing Spark jobs within an Azure Data Factory pipeline, which of the following best practices should you follow to ensure efficient execution and monitoring of the Spark jobs?
A. Run Spark jobs in parallel without any coordination.
B. Monitor Spark jobs using Azure Monitor and set up alerts for failures or performance issues.
C. Use the same Spark cluster for all jobs without considering resource requirements.
D. Manually start and monitor Spark jobs outside of Azure Data Factory.
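Option B reflects the recommended approach: let Data Factory orchestrate the Spark activity and let Azure Monitor surface failures (for example, via an alert on the factory's `PipelineFailedRuns` metric). As a rough illustration, a Spark job submitted through an `HDInsightSpark` activity in a pipeline definition might look like the following sketch. The linked service names, paths, and timeout values here are placeholders, not values from the question:

```json
{
  "name": "RunSparkJob",
  "type": "HDInsightSpark",
  "linkedServiceName": {
    "referenceName": "HDInsightLinkedService",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "rootPath": "adfspark/",
    "entryFilePath": "main.py",
    "sparkJobLinkedService": {
      "referenceName": "StorageLinkedService",
      "type": "LinkedServiceReference"
    }
  },
  "policy": {
    "timeout": "0.02:00:00",
    "retry": 1
  }
}
```

Because the activity runs inside the pipeline, its run status flows into Data Factory's monitoring views and Azure Monitor, where alerts can be configured on failed pipeline or activity runs, unlike options A, C, and D, which bypass coordination, resource sizing, or integrated monitoring.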