How would you conduct load testing for a streaming data pipeline, built with Azure Databricks and ingesting data from Event Hubs, to ensure it can handle peak data volumes?
A. Deploy a custom data generator in AKS to produce data streams, using Azure Monitor for performance metrics.
B. Manually insert data into Event Hubs and observe the processing in Databricks, adjusting volumes to simulate peak loads.
C. Configure the Azure Load Testing service to mimic real-time data ingestion patterns, analyzing performance in Databricks.
D. Use Apache JMeter to simulate Event Hubs data streams and monitor pipeline metrics in Databricks.
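To make the generator approach in option A concrete, below is a minimal sketch of a synthetic load producer using the azure-eventhub Python SDK. The connection string, event hub name, payload schema, target rate, and duration are all placeholder assumptions; a real test would use a payload shape matching production data and would typically run as multiple replicas (e.g., in AKS) to reach peak volumes.

```python
# Hypothetical Event Hubs load generator (a sketch, not a definitive harness).
# CONNECTION_STR, EVENTHUB_NAME, the payload schema, and the target rate
# below are all placeholders/assumptions.
import json
import time
import uuid
from datetime import datetime, timezone

from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "<event-hubs-namespace-connection-string>"  # placeholder
EVENTHUB_NAME = "<event-hub-name>"                           # placeholder
TARGET_EVENTS_PER_SEC = 5000  # tune toward the expected peak volume


def make_event() -> EventData:
    # Synthetic payload; replace with a schema matching production events.
    payload = {
        "id": str(uuid.uuid4()),
        "ts": datetime.now(timezone.utc).isoformat(),
        "value": 42,
    }
    return EventData(json.dumps(payload))


def run(duration_sec: int = 300) -> None:
    producer = EventHubProducerClient.from_connection_string(
        conn_str=CONNECTION_STR, eventhub_name=EVENTHUB_NAME
    )
    deadline = time.monotonic() + duration_sec
    with producer:
        while time.monotonic() < deadline:
            cycle_start = time.monotonic()
            batch = producer.create_batch()
            sent = 0
            while sent < TARGET_EVENTS_PER_SEC:
                try:
                    batch.add(make_event())
                    sent += 1
                except ValueError:
                    # Batch is full: flush it and start a new one.
                    producer.send_batch(batch)
                    batch = producer.create_batch()
            producer.send_batch(batch)
            # Pace the loop to roughly one send cycle per second.
            time.sleep(max(0.0, 1.0 - (time.monotonic() - cycle_start)))


if __name__ == "__main__":
    run()
```

While such a test runs, the consuming side can be observed through the Structured Streaming metrics in the Databricks UI (input rows per second, batch duration) and through Event Hubs metrics in Azure Monitor (incoming/outgoing messages, throttled requests) to identify the pipeline's saturation point.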