You are tasked with setting up a real-time data ingestion pipeline from IoT devices into Databricks that must handle high-velocity, high-volume data efficiently. Which approach ensures the scalability and reliability of the data ingestion layer?
A
Deploy individual Databricks jobs for each IoT device to ensure dedicated resources for ingestion.
B
Batch ingest data at hourly intervals to reduce the load on the Databricks clusters.
C
Directly stream IoT data into Delta tables without any intermediate processing to minimize latency.
D
Use Azure Event Hubs as a front buffer to ingest data streams, then process and store the data in Delta Lake using Databricks Structured Streaming.
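To make option D concrete, here is a minimal PySpark sketch of the pattern: Event Hubs absorbs device bursts as a front buffer, Structured Streaming reads from it, and the results land in Delta Lake with checkpointing for fault tolerance. It assumes the `azure-eventhubs-spark` connector is attached to the cluster; the connection string, schema, and storage paths are placeholders you would replace with your own.

```python
# Sketch only: assumes the azure-eventhubs-spark connector is installed
# on the Databricks cluster. Connection string, schema, and paths below
# are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.getOrCreate()

# Event Hubs acts as the front buffer, decoupling device send rates
# from downstream processing. The connector expects the connection
# string to be encrypted via its EventHubsUtils helper.
conn = "Endpoint=sb://<namespace>.servicebus.windows.net/;..."  # placeholder
eh_conf = {
    "eventhubs.connectionString":
        spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn)
}

# Example telemetry schema; adjust to your device payloads.
schema = StructType([
    StructField("deviceId", StringType()),
    StructField("temperature", DoubleType()),
    StructField("eventTime", TimestampType()),
])

# Read the stream; Event Hubs delivers each payload in a binary `body` column.
raw = spark.readStream.format("eventhubs").options(**eh_conf).load()
parsed = (raw
          .select(from_json(col("body").cast("string"), schema).alias("r"))
          .select("r.*"))

# Write to Delta Lake. The checkpoint location gives the query
# exactly-once semantics and lets it resume cleanly after failures,
# which is where the reliability in option D comes from.
(parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/iot_ingest")  # placeholder
    .outputMode("append")
    .start("/mnt/delta/iot_telemetry"))  # placeholder
```

Compared with the other options, this design scales by partitioning the Event Hub and letting the cluster autoscale, rather than by per-device jobs (A), delayed batches (B), or unbuffered writes that couple ingestion directly to device throughput (C).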