
**Answer (for fast verification): A. None of these changes will need to be made**
## Explanation

Delta Live Tables (DLT) is designed to accommodate existing medallion-style pipeline architectures, so the team can migrate without restructuring. Analyzing each option:

**A. None of these changes will need to be made — CORRECT**
- DLT supports the medallion architecture (bronze, silver, gold layers)
- DLT supports both Python and SQL, and a single pipeline can include both Python and SQL source notebooks
- DLT supports streaming sources natively

**B. The pipeline will need to stop using the medallion-based multi-hop architecture — INCORRECT**
- DLT actually encourages the medallion architecture; the bronze/silver/gold pattern is a recommended best practice in DLT

**C. The pipeline will need to be written entirely in SQL — INCORRECT**
- DLT supports both Python and SQL; the data engineer can keep the bronze and silver layers in Python while the data analyst keeps the gold layer in SQL, each in its own notebook within the same pipeline

**D. The pipeline will need to use a batch source in place of a streaming source — INCORRECT**
- DLT fully supports streaming as well as batch processing; streaming tables can be defined directly in a DLT pipeline

**E. The pipeline will need to be written entirely in Python — INCORRECT**
- As with option C, languages can be mixed across notebooks in one pipeline, so the gold layer can remain in SQL

**Key DLT features:**
1. **Language flexibility**: supports both Python and SQL
2. **Architecture support**: maintains the medallion architecture
3. **Processing modes**: supports both streaming and batch
4. **Incremental processing**: automatically handles incremental updates
5. **Data quality**: built-in expectations and data-quality monitoring

The existing pipeline structure is fully compatible with DLT, so none of the listed changes is required.
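As a sketch of why no change is needed, the engineer's bronze and silver layers could look roughly like the following DLT Python notebook. This is a hedged illustration, not the team's actual code: the table names, the `/mnt/raw/events` landing path, and the column names are invented for the example, and the code only runs inside a Databricks DLT pipeline (where `spark` is provided by the runtime).

```python
# Illustrative DLT pipeline source (Python notebook); runs only inside a
# Databricks Delta Live Tables pipeline. Paths and names are assumptions.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Bronze: raw streaming ingest via Auto Loader")
def bronze_events():
    # Streaming source is kept as-is -- no switch to batch is required.
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/events")  # hypothetical landing path
    )

@dlt.table(comment="Silver: cleaned events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # built-in data-quality expectation
def silver_events():
    # Read the bronze table incrementally from within the same pipeline.
    return dlt.read_stream("bronze_events").select(
        col("event_id"), col("event_type"), col("ts")
    )
```

The analyst's gold layer would then live in a separate SQL notebook attached to the same pipeline, e.g. `CREATE OR REFRESH LIVE TABLE gold_event_counts AS SELECT event_type, COUNT(*) AS n FROM LIVE.silver_events GROUP BY event_type` (again, names are illustrative). This is exactly the existing division of labor, carried over unchanged.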
Author: Keng Suppaseth
A data engineer and data analyst are working together on a data pipeline. The data engineer is working on the raw, bronze, and silver layers of the pipeline using Python, and the data analyst is working on the gold layer of the pipeline using SQL. The raw source of the pipeline is a streaming input. They now want to migrate their pipeline to use Delta Live Tables.
Which of the following changes will need to be made to the pipeline when migrating to Delta Live Tables?
A
None of these changes will need to be made
B
The pipeline will need to stop using the medallion-based multi-hop architecture
C
The pipeline will need to be written entirely in SQL
D
The pipeline will need to use a batch source in place of a streaming source
E
The pipeline will need to be written entirely in Python