A data engineer is refactoring Delta Live Tables (DLT) code that contains multiple table definitions following the same pattern:
import dlt

@dlt.table(name="t1_dataset")
def t1_dataset():
    return spark.read.table("t1")

@dlt.table(name="t2_dataset")
def t2_dataset():
    return spark.read.table("t2")

@dlt.table(name="t3_dataset")
def t3_dataset():
    return spark.read.table("t3")
They attempt to parameterize the table creation using this loop:
tables = ["t1", "t2", "t3"]
for t in tables:
    @dlt.table(name=f"{t}_dataset")
    def new_table():
        return spark.read.table(t)
After running the pipeline with this refactored code, the DAG displays incorrect configuration values for these tables. What should the data engineer do to correct this?
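For context: the symptom stems from Python's late binding of closures. Each new_table function references the loop variable t itself rather than its value at definition time, so by the time the pipeline evaluates the functions, all three datasets resolve t to "t3". A minimal sketch of one common fix is to wrap the definition in a helper function so each table name is bound as a function argument (the helper name create_table is illustrative, not from the original question):

import dlt

def create_table(table_name):
    # table_name is a new binding on every call, so each
    # generated dataset keeps its own source table name
    @dlt.table(name=f"{table_name}_dataset")
    def new_table():
        return spark.read.table(table_name)

tables = ["t1", "t2", "t3"]
for t in tables:
    create_table(t)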