How can you ensure data consistency and integrity across datasets distributed in multiple Azure Databricks workspaces and regions during deployment?
A. Configuring Azure Logic Apps to automate data consistency validations and integrate with Azure Monitor for alerts on inconsistencies
B. Using Azure Data Factory to orchestrate data movement and employing its data flow debug features for consistency checks
C. Leveraging Delta Lake's ACID transaction capabilities within Databricks for cross-workspace and cross-region consistency validation
D. Implementing custom Spark jobs to periodically compare datasets across regions and workspaces, alerting on discrepancies