
Answer-first summary for fast verification
Answer: Maintain data quality rules in a separate Databricks notebook that each DLT notebook or file can import as a library.
To reuse data quality rules across multiple tables in a DLT pipeline, maintain the rules in a separate notebook (Option C). This centralizes management: each DLT notebook or file imports the common notebook as a library, so expectations are applied consistently without duplication. Global Python variables (Option B) are not shared across notebooks, even within the same pipeline. An external job (Option A) cannot dynamically modify a pipeline's constraints. Storing rules in a Delta table (Option D) is impractical here because DLT expectations are defined statically when the pipeline is set up, not resolved dynamically at runtime.
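A minimal sketch of the shared-notebook pattern described above. The module and function names (`shared_expectations`, `get_rules`, the `RULES` catalog, and the layer tags) are illustrative assumptions, not a Databricks-prescribed API; only the `@dlt.expect_all*` decorators, which accept a dict of expectation name to SQL constraint, come from the DLT Python interface.

```python
# shared_expectations (hypothetical shared notebook/module name):
# a central catalog of expectations that every DLT notebook in the
# pipeline can import, so rules are defined exactly once.

# Expectation name -> SQL constraint expression, grouped by layer tag.
RULES = {
    "bronze": {
        "valid_id": "id IS NOT NULL",
        "valid_timestamp": "event_ts IS NOT NULL",
    },
    "silver": {
        "valid_email": "email IS NOT NULL",
        "positive_amount": "amount > 0",
    },
}


def get_rules(layer: str) -> dict:
    """Return the expectation dict for a layer.

    Intended for use with the DLT decorators that take a dict of
    expectations, e.g. @dlt.expect_all_or_drop(get_rules("silver")).
    Unknown layers yield an empty dict (no expectations applied).
    """
    return RULES.get(layer, {})
```

In a consuming DLT notebook, the shared notebook would be pulled in (for example via `%run ./shared_expectations` or a module import, depending on how the pipeline source is organized) and applied to a table definition:

```python
# Sketch of usage inside a DLT notebook (assumes the import above).
import dlt

@dlt.table
@dlt.expect_all_or_drop(get_rules("silver"))
def clean_orders():
    return spark.read.table("raw_orders")
```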
Author: LeetQuiz Editorial Team
What method enables a team of data engineers to reuse common data quality expectations across multiple tables in a Delta Live Tables (DLT) pipeline?
A
Add data quality constraints to tables in this pipeline using an external job with access to pipeline configuration files.
B
Use global Python variables to make expectations visible across DLT notebooks included in the same pipeline.
C
Maintain data quality rules in a separate Databricks notebook that each DLT notebook or file can import as a library.
D
Maintain data quality rules in a Delta table outside of this pipeline's target schema, providing the schema name as a pipeline parameter.