
Answer-first summary for fast verification
Answer: Create a single job with three tasks and configure task dependencies in the job settings to ensure sequential execution (option D).
A single multi-task job in Databricks gives precise control over ordering: in the Jobs UI (or via the Jobs API) you declare that each task depends on the previous one, so transformation starts only after ingestion succeeds and model training only after transformation. This avoids manual triggering and hand-rolled dependency logic inside the task code.
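The same dependency chain can be expressed programmatically. Below is a minimal sketch of the JSON payload one might send to the Databricks Jobs API 2.1 `jobs/create` endpoint; the job name, notebook paths, and cluster key are illustrative placeholders, not values from the question.

```python
# Sketch of a multi-task job definition for the Databricks Jobs API 2.1
# (POST /api/2.1/jobs/create). Notebook paths and the job cluster key are
# hypothetical placeholders; adapt them to your workspace.
job_payload = {
    "name": "ingest-transform-train",
    "tasks": [
        {
            "task_key": "data_ingestion",
            "notebook_task": {"notebook_path": "/Jobs/ingest"},
            "job_cluster_key": "shared_cluster",
        },
        {
            "task_key": "data_transformation",
            # Runs only after data_ingestion completes successfully.
            "depends_on": [{"task_key": "data_ingestion"}],
            "notebook_task": {"notebook_path": "/Jobs/transform"},
            "job_cluster_key": "shared_cluster",
        },
        {
            "task_key": "model_training",
            # Runs only after data_transformation completes successfully.
            "depends_on": [{"task_key": "data_transformation"}],
            "notebook_task": {"notebook_path": "/Jobs/train"},
            "job_cluster_key": "shared_cluster",
        },
    ],
}

# Derive the execution order implied by the depends_on edges.
order = [t["task_key"] for t in job_payload["tasks"]]
print(order)
```

The `depends_on` field is what the job settings UI configures under the hood: each entry names an upstream `task_key`, and a task is scheduled only once all of its upstream tasks have succeeded.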
Author: LeetQuiz Editorial Team
Design a multi-task job in Databricks that involves three tasks: data ingestion, data transformation, and model training. Each task should have specific dependencies on the completion of the previous tasks. Describe how you would configure this job in Databricks.
A. Create three separate jobs and manually trigger each job after the previous one completes.
B. Use the Databricks Jobs API to chain the tasks together, ensuring each task starts only after the previous one completes.
C. Schedule all three tasks to run simultaneously and handle dependencies within the code of each task.
D. Create a single job with three tasks, configure task dependencies in the job settings to ensure sequential execution.