How can you test and ensure fault tolerance and data recovery in a complex data workflow involving multiple Azure services like Data Lake Storage, Databricks, and Synapse Analytics, especially in cases of partial failures?
A. Relying on Azure's built-in service redundancy and recovery features, assuming automatic fault tolerance without specific testing.
B. Implementing manual switchovers to backup data pipelines and storage accounts in the event of a failure detected by monitoring alerts.
C. Creating a custom simulation of partial failures (like network outages or service disruptions) in non-production environments and testing the workflow's response using Azure Chaos Studio (sketched after the options).
D. Using Databricks Delta Lake's transaction log features to roll back changes in case of failures, combined with Azure Site Recovery for service disruptions (also sketched below).
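For context on option C, below is a minimal sketch of triggering a pre-authored Chaos Studio experiment from Python, assuming the azure-mgmt-chaos SDK (exact operation names can vary by SDK version); the subscription, resource group, and experiment names are hypothetical, and the experiment itself (for example, a network disconnect fault targeting the workflow's compute) would be defined in Chaos Studio beforehand.

```python
# Minimal sketch: start a pre-authored Chaos Studio experiment and wait
# for the run to be accepted. Assumes `pip install azure-identity
# azure-mgmt-chaos` and that "dl-network-outage" (hypothetical) already
# exists, targeting only non-production resources.
from azure.identity import DefaultAzureCredential
from azure.mgmt.chaos import ChaosManagementClient

client = ChaosManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# begin_start launches the fault injection as a long-running operation;
# .result() blocks until Chaos Studio has accepted the experiment run.
poller = client.experiments.begin_start(
    resource_group_name="rg-chaos-nonprod",  # hypothetical
    experiment_name="dl-network-outage",     # hypothetical
)
poller.result()

# While the fault is active, run the pipeline end to end and verify that
# retries, checkpoints, and monitoring alerts behave as designed.
```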
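And for option D, Delta Lake's transaction log does support restoring a table to an earlier version; here is a minimal sketch using the delta-spark Python API, with a hypothetical table name and an illustrative version number.

```python
# Minimal sketch: inspect a Delta table's transaction log and restore it
# to a known-good version after a failed or partial write. Assumes a
# Spark session with Delta Lake enabled (e.g., on Databricks) and a
# hypothetical table named "lake.events".
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
table = DeltaTable.forName(spark, "lake.events")

# The history DataFrame lists each committed version with its timestamp
# and operation, which helps pick a version from before the failure.
table.history(10).select("version", "timestamp", "operation").show()

# Restore rewrites the table state to the chosen version; 42 is an
# illustrative placeholder for a known-good version.
table.restoreToVersion(42)
```

Note that a restore like this recovers the table's data state but not a disrupted service, which is why option D pairs it with Azure Site Recovery.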