
A data engineer has three notebooks in an ELT pipeline. The notebooks need to be executed in a specific order for the pipeline to complete successfully. The data engineer would like to use Delta Live Tables to manage this process.
Which of the following steps must the data engineer take as part of implementing this pipeline using Delta Live Tables?
A
They need to create a Delta Live Tables pipeline from the Data page.
B
They need to create a Delta Live Tables pipeline from the Jobs page.
C
They need to create a Delta Live Tables pipeline from the Compute page.
D
They need to refactor their notebooks to use Python and the dlt library.
E
They need to refactor their notebooks to use SQL and the CREATE LIVE TABLE keyword.
Explanation:
When implementing an ELT pipeline with Delta Live Tables (DLT) across multiple notebooks that must run in a specific order, the correct approach is:
B. They need to create a Delta Live Tables pipeline from the Jobs page.
Why the other options are not required:
D. Refactor the notebooks to use Python and the dlt library. ❌ Not necessarily required. While Python with the dlt library is one approach, you can also use SQL. The key requirement is creating the pipeline from the Jobs page.
E. Refactor the notebooks to use SQL and the CREATE LIVE TABLE keyword. ❌ Not necessarily required. While SQL with CREATE LIVE TABLE is one approach, Python with the dlt decorator is also valid. The fundamental requirement is creating the DLT pipeline from the Jobs page.
Delta Live Tables pipelines are created and managed through the Jobs page in Databricks, where you can configure notebook dependencies and execution order for your ELT workflows.
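For concreteness, here is a minimal sketch of what the notebook code might look like once the pipeline has been created from the Jobs page. It uses the Python dlt library, but an equivalent SQL version with CREATE LIVE TABLE would behave the same way. The table names, source path, and column names are hypothetical. The point is that DLT infers the execution order from the dlt.read() dependencies between tables, so the engineer never sequences the notebooks by hand:

import dlt
from pyspark.sql import functions as F

# Bronze: raw ingest. The source path below is a placeholder.
@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_bronze():
    # `spark` is provided by the DLT runtime in Databricks notebooks.
    return spark.read.format("json").load("/mnt/raw/orders")

# Silver: reads from orders_bronze via dlt.read(), so DLT runs it second.
@dlt.table(comment="Validated orders")
def orders_silver():
    return dlt.read("orders_bronze").where(F.col("order_id").isNotNull())

# Gold: reads from orders_silver, so DLT runs it last.
@dlt.table(comment="Daily revenue aggregated from validated orders")
def daily_revenue_gold():
    return (
        dlt.read("orders_silver")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

Each of the three table definitions could live in its own notebook; when all three notebooks are attached to the same DLT pipeline created from the Jobs page, DLT resolves the bronze → silver → gold ordering automatically from the dependency graph.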