To ensure updates to data transformation logic in your Azure Databricks notebooks do not introduce errors or regressions, which automated testing framework should you implement?
A. Setting up a continuous integration/continuous deployment (CI/CD) pipeline using GitHub Actions to run tests against a mirrored production environment
B. Integrating Databricks notebooks with Azure DevOps pipelines, using PyTest for regression testing with mock data sets
C. Utilizing Databricks Jobs with scheduled test runs, comparing outputs to expected results stored in Azure Blob Storage
D. Writing custom test scripts within Databricks notebooks that are manually executed before each deployment
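Option B describes PyTest-based regression testing with mock data sets, run automatically from an Azure DevOps pipeline. A minimal sketch of what such a test module might look like, assuming a hypothetical transformation function `add_full_name` factored out of a notebook (the function name and mock data are illustrative, not from any real project):

```python
# Hypothetical transformation logic extracted from a Databricks notebook
# into an importable module so PyTest can exercise it without a cluster.

def add_full_name(rows):
    """Derive a full_name column from first/last name columns."""
    return [
        {**row, "full_name": f"{row['first']} {row['last']}"}
        for row in rows
    ]


# --- regression tests with mock data sets (run via `pytest` in CI) ---

def test_add_full_name_combines_columns():
    mock = [{"first": "Ada", "last": "Lovelace"}]
    result = add_full_name(mock)
    assert result[0]["full_name"] == "Ada Lovelace"


def test_add_full_name_preserves_existing_columns():
    mock = [{"first": "Alan", "last": "Turing", "id": 1}]
    result = add_full_name(mock)
    assert result[0]["id"] == 1
```

In an Azure DevOps pipeline, a step such as `pytest tests/` would execute these tests on every commit, catching regressions in the transformation logic before deployment.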