For a DLT pipeline that needs to be triggered based on complex event conditions from multiple data sources, which method best supports the dynamic initiation of pipeline runs?
A. Rely on manual initiation of pipeline runs based on reports from data source administrators.
B. Configure the DLT pipeline to poll data sources at regular intervals, triggering runs based on data presence.
C. Use external event management systems like Apache Kafka to aggregate events, invoking the DLT pipeline via the Databricks REST API when conditions are met.
D. Implement a custom Spark Structured Streaming application that listens for specific events and triggers DLT pipeline runs via REST API calls.
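Options C and D both rely on programmatically starting a pipeline update once an external condition is satisfied. The sketch below is a minimal, hedged illustration of that trigger step in Python: it calls the Databricks Pipelines REST API to start an update of an existing DLT pipeline. The host, token, and pipeline ID environment variables are placeholders for illustration, and the endpoint path should be verified against your workspace's API version.

```python
import os
import requests

# Minimal sketch: trigger a DLT pipeline update via the Databricks REST API.
# Assumes DATABRICKS_HOST, DATABRICKS_TOKEN, and DLT_PIPELINE_ID are set in
# the environment; these names are illustrative placeholders.

DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token or service principal token
PIPELINE_ID = os.environ["DLT_PIPELINE_ID"]        # ID of the target DLT pipeline


def trigger_pipeline_update() -> str:
    """Request a new update (run) of the DLT pipeline and return its update ID."""
    response = requests.post(
        f"{DATABRICKS_HOST}/api/2.0/pipelines/{PIPELINE_ID}/updates",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
        json={"full_refresh": False},  # incremental update; set True to reprocess all data
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["update_id"]


if __name__ == "__main__":
    # In practice this call would be made by the event consumer (for example a
    # Kafka consumer or a Structured Streaming job) once the composite event
    # condition across the source systems has been met.
    print(f"Started DLT update: {trigger_pipeline_update()}")
```

The key design point is that the event-aggregation layer (Kafka, a streaming job, or similar) decides *when* the condition is met, while the REST call simply initiates the run, which is what makes the initiation dynamic rather than schedule- or poll-driven.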