
A data engineering team needs to recover from failures in a multi-task Databricks Jobs workflow. To optimize resource utilization and time, they must rerun the failed tasks while avoiding recomputation of tasks that have already completed successfully. What is the most efficient approach?
A
Restart the compute cluster and manually trigger a full rerun of the entire workflow.
B
Use the 'Repair run' feature in the Databricks Jobs UI to rerun only the failed tasks and any tasks that depend on them.
C
Programmatically create a new temporary workflow designed specifically to handle the logic of the failed tasks.
D
Clone the existing job definition and execute the new job from the beginning.
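The correct answer is B. The same repair can also be triggered programmatically through the Databricks Jobs API 2.1 (`POST /api/2.1/jobs/runs/repair`). The sketch below only builds the request body; the `run_id` and task keys are placeholder values, and a real call would additionally need a workspace URL and an access token.

```python
import json

def build_repair_payload(run_id, failed_task_keys):
    """Build the body for POST /api/2.1/jobs/runs/repair.

    Only the listed tasks (and tasks downstream of them) are rerun;
    successfully completed upstream tasks are not recomputed.
    """
    return {
        "run_id": run_id,                 # ID of the failed job run
        "rerun_tasks": failed_task_keys,  # task keys to repair
    }

# Placeholder run ID and task keys for illustration
payload = build_repair_payload(112233, ["ingest_orders", "transform_orders"])
print(json.dumps(payload, indent=2))
```

Sending this payload to the repair endpoint starts a repair run that reuses the results of the tasks that already succeeded, which is exactly the resource-saving behavior the question asks for.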