To ensure the reliability and stability of a complex ETL pipeline in Azure Databricks, which strategy would you adopt for automated regression testing so that breaks or performance regressions are detected without manual effort?
A. Utilizing Azure Monitor alerts to notify developers of any performance degradation post-deployment, without conducting pre-deployment testing
B. Creating a comprehensive set of test cases that automatically execute post-deployment via Azure Pipelines, comparing actual outputs with expected results
C. Depending entirely on manual testing by developers prior to each deployment to identify any regressions
D. Deploying a custom logging mechanism to monitor performance trends over time, with regressions identified through manual analysis of logs
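To illustrate the strategy in option B, below is a minimal sketch of an output-comparison regression test, assuming the ETL step is exposed as a hypothetical transform_orders function and the suite runs with pytest against a local SparkSession on an Azure Pipelines agent; the function name, columns, and data are placeholders, not part of the original question.

```python
# Minimal sketch of an automated regression test for a PySpark ETL step.
# transform_orders and the sample data are hypothetical illustrations.
import pytest
from pyspark.sql import SparkSession, functions as F


@pytest.fixture(scope="session")
def spark():
    # Local SparkSession so the test can run on an Azure Pipelines build agent.
    return (SparkSession.builder
            .master("local[2]")
            .appName("etl-regression-tests")
            .getOrCreate())


def transform_orders(df):
    # Hypothetical ETL step under test: keep completed orders and add a total column.
    return (df.filter(F.col("status") == "COMPLETED")
              .withColumn("total", F.col("quantity") * F.col("unit_price")))


def test_transform_orders_matches_expected(spark):
    # A fixed input and its expected output serve as the regression baseline.
    input_df = spark.createDataFrame(
        [("o1", "COMPLETED", 2, 10.0),
         ("o2", "CANCELLED", 1, 5.0)],
        ["order_id", "status", "quantity", "unit_price"],
    )
    expected = [("o1", "COMPLETED", 2, 10.0, 20.0)]

    # Compare actual output rows with the expected result; any mismatch fails the test.
    actual = [tuple(row) for row in transform_orders(input_df).collect()]
    assert actual == expected
```

Run as a pytest step in an Azure Pipelines stage after deployment, a suite of such tests fails the build whenever actual outputs drift from the expected results, surfacing regressions automatically rather than relying on manual checks or log analysis.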