After deploying a new data pipeline in Azure Databricks to handle a substantial rise in data volume, which approach would you take to verify its scalability and performance under anticipated future data loads?
A
Gradually increase the data volume processed by the pipeline in stages, employing Azure Monitor to observe performance metrics.
B
Use historical data alongside predictive analytics in Databricks to forecast future data volumes and evaluate the pipeline's scalability with simulated data.
C
Perform a manual test run of the pipeline with a sample dataset adjusted to the expected future volume, noting resource usage and throughput.
D
Apply Azure Load Testing to mimic higher data volumes, overseeing the pipeline's performance and dynamically adjusting resources as required.
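The staged approach in option A can be sketched as a small test harness: run the pipeline at progressively larger data volumes and record throughput at each stage. This is a minimal illustration only — `run_pipeline` is a hypothetical stand-in for the actual Databricks job, and a real test would read cluster metrics from Azure Monitor rather than wall-clock timings.

```python
import time

def run_pipeline(records):
    """Hypothetical stand-in for the Databricks pipeline under test;
    here it just applies a trivial per-record transformation."""
    return [r * 2 for r in records]

def staged_load_test(base_volume, stages, growth_factor=2):
    """Run the pipeline at progressively larger synthetic data volumes
    and record throughput (records/second) at each stage."""
    results = []
    volume = base_volume
    for stage in range(stages):
        data = list(range(volume))            # synthetic dataset at this scale
        start = time.perf_counter()
        run_pipeline(data)
        elapsed = time.perf_counter() - start
        throughput = volume / elapsed if elapsed > 0 else float("inf")
        results.append({"stage": stage, "volume": volume, "throughput": throughput})
        volume *= growth_factor               # next stage processes more data
    return results

report = staged_load_test(base_volume=10_000, stages=3)
for row in report:
    print(f"stage {row['stage']}: {row['volume']} records")
```

In practice, comparing throughput across stages reveals whether performance degrades non-linearly as volume grows — the signal that the pipeline needs re-tuning before future loads arrive.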