What method enables a team of data engineers to reuse common data quality expectations across multiple tables in a Delta Live Tables (DLT) pipeline?
A. Add data quality constraints to tables in this pipeline using an external job with access to pipeline configuration files.
B. Use global Python variables to make expectations visible across DLT notebooks included in the same pipeline.
C. Maintain data quality rules in a separate Databricks notebook that each DLT notebook or file can import as a library.
D. Maintain data quality rules in a Delta table outside of this pipeline's target schema, providing the schema name as a pipeline parameter.
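As context for the options above, a minimal sketch of the shared-module pattern: a single Python module holds a dictionary of expectation names mapped to SQL constraint strings, and each DLT notebook imports it and applies the rules with `dlt.expect_all_or_drop`. The module name `shared_rules`, the rule names, and the table names are illustrative assumptions, not part of the question.

```python
# shared_rules.py (hypothetical shared module imported by each DLT notebook)
# Expectation name -> SQL constraint, the shape dlt.expect_all* decorators accept.
COMMON_RULES = {
    "valid_id": "id IS NOT NULL",
    "valid_timestamp": "event_ts IS NOT NULL",
}

def with_rules(extra=None):
    """Merge table-specific rules onto the common rule set."""
    merged = dict(COMMON_RULES)
    if extra:
        merged.update(extra)
    return merged

# In a DLT notebook (needs the DLT runtime; shown here only for illustration):
# import dlt
# from shared_rules import with_rules
#
# @dlt.table
# @dlt.expect_all_or_drop(with_rules({"positive_amount": "amount > 0"}))
# def orders_clean():
#     return dlt.read("orders_raw")
```

Because the rules live in one importable module, every table in the pipeline can apply the same constraints without copy-pasting expectation definitions.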