In the context of financial services, maintaining strict data lineage and auditability for all data processed in your Databricks lakehouse is crucial. Which combination of Databricks tools and features best meets these requirements?
A
Implement custom logging within Spark jobs to track data modifications manually.
B
Rely solely on Azure Purview for managing data lineage without integrating with Databricks features.
C
Use Delta Lake for ACID transactions and time travel features combined with Databricks Unity Catalog for data governance and lineage.
D
Utilize MLflow for tracking model versions and manual tagging for dataset versions to simulate data lineage.