
A data engineering team is developing a Delta Live Tables (DLT) pipeline containing several tables that require identical data quality checks. To improve maintainability and reduce redundancy, they want to reuse these data quality rules across all tables. What is the recommended approach for implementing reusable expectations in DLT?
A. Define the data quality rules in a centralized Databricks notebook or Python file and import them as a library within each DLT notebook.
B. Persist the data quality rules in a Delta table outside the pipeline's target schema and retrieve them by passing the schema name as a pipeline parameter.
C. Define global Python variables within one DLT notebook and rely on the execution context to share them across all other notebooks in the pipeline.
D. Implement an external job that programmatically modifies the pipeline's JSON configuration files to inject data quality constraints into the table definitions.
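For context, a minimal sketch of what reusable expectations can look like in a Python DLT notebook: rules are maintained centrally (here, in a Delta table) and applied to each table with `dlt.expect_all_or_drop`. The table name `ops.quality_rules`, its columns (`name`, `constraint`, `tag`), and the dataset names are illustrative assumptions, not part of the question.

```python
import dlt
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()


def get_rules(tag: str) -> dict:
    """Load shared expectations for a given tag from a hypothetical rules table
    with columns: name, constraint, tag."""
    df = spark.read.table("ops.quality_rules").filter(f"tag = '{tag}'")
    return {row["name"]: row["constraint"] for row in df.collect()}


@dlt.table(comment="Orders with shared data quality checks applied")
@dlt.expect_all_or_drop(get_rules("core_checks"))  # same rule set reused across tables
def orders_clean():
    return dlt.read("orders_raw")
```

The same `get_rules("core_checks")` call can decorate every table definition in the pipeline, so the checks live in one place rather than being copied into each table.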