
Answer-first summary for fast verification
Answer: Monitor Spark jobs using Azure Monitor and set up alerts for failures or performance issues.
Option B is correct: Azure Monitor integrates with Azure Data Factory, so you can track Spark job execution and configure alerts on failures or performance degradation, which lets you identify and address problems quickly. Option A is not recommended because running jobs in parallel without any coordination can cause resource contention and degrade performance. Option C is inefficient because it ignores the differing resource requirements of individual jobs. Option D is impractical because it requires manual intervention and forgoes Azure Data Factory's built-in orchestration and monitoring capabilities.
Author: LeetQuiz Editorial Team
In a scenario where you need to manage Spark jobs within an Azure Data Factory pipeline, which of the following best practices should you follow to ensure efficient execution and monitoring of the Spark jobs?
A
Run Spark jobs in parallel without any coordination.
B
Monitor Spark jobs using Azure Monitor and set up alerts for failures or performance issues.
C
Use the same Spark cluster for all jobs without considering resource requirements.
D
Manually start and monitor Spark jobs outside of Azure Data Factory.
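The monitoring setup recommended in Option B can be sketched in code. Below is a minimal, hedged example that builds an ARM-style metric alert body for failed pipeline runs in a Data Factory. The metric name `PipelineFailedRuns` is a standard Azure Data Factory metric, but the resource names, thresholds, and action group ID here are placeholders; verify the metric namespace and deploy the payload via the Azure Monitor REST API, an ARM template, or an SDK of your choice.

```python
import json

def build_failed_runs_alert(subscription_id: str,
                            resource_group: str,
                            factory_name: str,
                            action_group_id: str) -> dict:
    """Return an ARM-style metric alert body that fires when any
    pipeline run in the factory fails within the evaluation window.
    All identifiers passed in are placeholders for illustration."""
    factory_id = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.DataFactory/factories/{factory_name}"
    )
    return {
        "location": "global",
        "properties": {
            "severity": 2,
            "enabled": True,
            "scopes": [factory_id],
            "evaluationFrequency": "PT5M",  # evaluate every 5 minutes
            "windowSize": "PT5M",           # over a 5-minute window
            "criteria": {
                "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
                "allOf": [{
                    "name": "FailedRuns",
                    "metricName": "PipelineFailedRuns",  # standard ADF metric
                    "operator": "GreaterThan",
                    "threshold": 0,  # alert on any failure
                    "timeAggregation": "Total",
                }],
            },
            "actions": [{"actionGroupId": action_group_id}],
        },
    }

# Placeholder values for illustration only.
alert = build_failed_runs_alert("sub-id", "rg-demo", "adf-demo", "ag-id")
print(json.dumps(alert, indent=2))
```

A similar alert on `ActivityFailedRuns` or custom performance metrics would cover slow or failing Spark activities specifically; the key point is that alerting is declarative and automated rather than relying on someone watching job runs by hand (Option D).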