
A data science team aims to build a machine learning pipeline in which clickstream and orders data are ingested using Delta Live Tables (DLT). Afterwards, a Notebook containing Python logic needs to run to perform data transformation, feature extraction, and model training and testing. The team wants to orchestrate this entire process using Databricks Workflows. Which of the following methods can the team employ to achieve this goal?
A
Establish two separate jobs: run the DLT Pipeline job in triggered mode and, upon its success, manually initiate the Notebook job.
B
Utilize a single job to configure both a DLT Pipeline task and a Notebook task, employing two distinct tasks within the same job with a linear dependency.
C
Since a single job cannot include both a DLT Pipeline task and a Notebook task, use two separate jobs with a linear dependency.
D
Employ a single job to set up both a DLT Pipeline task and a Notebook task, running two different tasks in the same job in parallel.
E
There is no feasible method to orchestrate such a workflow in Databricks.
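For reference, the sketch below shows one way a single Databricks Workflows job containing both a DLT pipeline task and a dependent notebook task could be created through the Jobs API 2.1. The workspace URL, token, pipeline ID, notebook path, and cluster settings are placeholders for illustration, not values taken from the question.

```python
import requests

# Placeholder values -- substitute a real workspace URL, access token,
# DLT pipeline ID, and notebook path for your environment.
DATABRICKS_HOST = "https://<workspace-url>"
TOKEN = "<personal-access-token>"

job_payload = {
    "name": "clickstream_orders_ml_pipeline",
    "tasks": [
        {
            # Task 1: ingest clickstream and orders data via a DLT pipeline.
            "task_key": "dlt_ingest",
            "pipeline_task": {"pipeline_id": "<dlt-pipeline-id>"},
        },
        {
            # Task 2: notebook for transformation, feature extraction, and
            # model training/testing; runs only after the DLT task succeeds.
            "task_key": "train_and_test_model",
            "depends_on": [{"task_key": "dlt_ingest"}],
            "notebook_task": {"notebook_path": "/Workspace/ml/train_and_test"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
        },
    ],
}

# Create the job; the response body contains the new job_id.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_payload,
)
resp.raise_for_status()
print(resp.json())
```

The `depends_on` entry on the second task expresses the linear dependency: the notebook task is scheduled only after the DLT pipeline task completes successfully.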