
Answer-first summary for fast verification

**Answer: A. None of these changes will need to be made.**
## Explanation

Delta Live Tables (DLT) is designed to be flexible and to support existing pipeline architectures. Analyzing each option:

**A. None of these changes will need to be made (CORRECT)**
- DLT supports both Python and SQL, so the mixed-language approach can continue
- DLT supports the medallion architecture (bronze, silver, gold layers)
- DLT supports streaming sources
- There is no requirement to rewrite everything in one language

**B. The pipeline will need to stop using the medallion-based multi-hop architecture (INCORRECT)**
- DLT actually encourages and supports the medallion architecture
- You can define bronze (raw), silver (cleaned), and gold (aggregated) tables within DLT

**C. The pipeline will need to be written entirely in SQL (INCORRECT)**
- DLT supports both Python and SQL; a single pipeline can include notebooks written in either language
- The data engineer can continue using Python for the bronze and silver layers while the analyst uses SQL for the gold layer

**D. The pipeline will need to use a batch source in place of a streaming source (INCORRECT)**
- DLT supports both streaming and batch processing
- Streaming sources are declared as streaming tables (the `STREAMING` keyword in SQL)

**E. The pipeline will need to be written entirely in Python (INCORRECT)**
- DLT supports both languages; there is no requirement to use only Python

**Key points:**
- DLT is designed to work with existing Databricks workflows
- It supports the medallion architecture
- It supports both Python and SQL
- It supports both streaming and batch processing
- The main change would be adding DLT-specific keywords (such as `LIVE` and `STREAMING`) where they are not already present, but that change is not listed among the options, which confirms option A
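To make the mixed-language setup concrete, here is a minimal sketch of how such a pipeline might look after migration. The table names, source path, and column names are illustrative assumptions, not part of the question; the code relies on the Databricks DLT runtime (the `dlt` module and the implicit `spark` session), so it runs only inside a DLT pipeline, not as a standalone script.

```
# Engineer's Python notebook (bronze and silver layers).
# Names and paths below are hypothetical examples.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw streaming ingest (bronze)")
def bronze_events():
    # Auto Loader incrementally reads the streaming source
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/events"))  # illustrative path

@dlt.table(comment="Cleaned records (silver)")
def silver_events():
    # Read the bronze table as a stream and drop malformed rows
    return (dlt.read_stream("bronze_events")
            .where(F.col("event_type").isNotNull()))

# The analyst's gold layer can live in a separate SQL notebook
# attached to the same pipeline, for example:
#
#   CREATE OR REFRESH LIVE TABLE gold_daily_counts AS
#   SELECT event_date, count(*) AS n_events
#   FROM LIVE.silver_events
#   GROUP BY event_date;
```

Both notebooks are attached to one DLT pipeline, so the medallion layers, the streaming source, and the Python/SQL split all carry over unchanged.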
Author: Keng Suppaseth
A data engineer and data analyst are working together on a data pipeline. The data engineer is working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming input. They now want to migrate their pipeline to use Delta Live Tables.
Which of the following changes will need to be made to the pipeline when migrating to Delta Live Tables?
A. None of these changes will need to be made
B. The pipeline will need to stop using the medallion-based multi-hop architecture
C. The pipeline will need to be written entirely in SQL
D. The pipeline will need to use a batch source in place of a streaming source
E. The pipeline will need to be written entirely in Python