
Answer-first summary for fast verification
Answer: Use external event management systems like Apache Kafka to aggregate events, invoking the DLT pipeline via the Databricks REST API when conditions are met.
This approach best supports dynamic initiation of pipeline runs: Kafka aggregates events from multiple data sources in real time, and the Databricks REST API lets an external consumer start a pipeline update the moment the combined conditions are satisfied. It scales with event volume, decouples event detection from pipeline execution, and avoids both manual intervention and wasteful fixed-interval polling.
Author: LeetQuiz Editorial Team
For a DLT pipeline that needs to be triggered based on complex event conditions from multiple data sources, which method best supports the dynamic initiation of pipeline runs?
A
Rely on manual initiation of pipeline runs based on reports from data source administrators.
B
Configure the DLT pipeline to poll data sources at regular intervals, triggering runs based on data presence.
C
Use external event management systems like Apache Kafka to aggregate events, invoking the DLT pipeline via the Databricks REST API when conditions are met.
D
Implement a custom Spark Structured Streaming application that listens for specific events and triggers DLT pipeline runs via REST API calls.
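The recommended approach (option C) can be sketched in two parts: a pure function that decides whether the aggregated events satisfy the trigger condition, and a call to the Databricks REST API endpoint `POST /api/2.0/pipelines/{pipeline_id}/updates` that starts a pipeline update. The host, pipeline ID, token, event schema, and required-source set below are illustrative assumptions, not values from the question; in practice the events would be consumed from a Kafka topic.

```python
import json
import urllib.request

def condition_met(events, required_sources):
    """Return True once a 'ready' event has arrived from every required source.

    `events` is a list of dicts such as {"source": "orders", "status": "ready"};
    this schema is a hypothetical example of aggregated Kafka messages.
    """
    seen = {e["source"] for e in events if e.get("status") == "ready"}
    return required_sources <= seen

def trigger_pipeline_update(host, pipeline_id, token):
    """Start a DLT pipeline update via the Databricks REST API.

    Host, pipeline_id, and token are placeholders supplied by the caller.
    """
    req = urllib.request.Request(
        f"{host}/api/2.0/pipelines/{pipeline_id}/updates",
        data=b"{}",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The API returns the ID of the update it started.
        return json.load(resp)["update_id"]

# In a real deployment, a Kafka consumer loop would collect events and call
# trigger_pipeline_update(...) once condition_met(...) returns True, e.g.:
#
#   if condition_met(buffered_events, {"orders", "inventory", "shipments"}):
#       trigger_pipeline_update(host, pipeline_id, token)
#       buffered_events.clear()
```

Keeping the condition check separate from the API call makes the trigger logic easy to test without network access, which matters when conditions span several sources.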