
In the context of designing a Delta Live Tables (DLT) pipeline on Azure, consider a scenario where you must ensure efficient data flow management, error handling, and task coordination across multiple processing steps. The pipeline must also comply with strict data governance policies and scale dynamically with workload. Given these requirements, what is the primary purpose of the pipeline's control mechanism? (Choose one correct option)
A. To act as the primary storage for raw, unprocessed data, ensuring data is always available for processing.
B. To serve as the final repository for processed data, making it readily available for analytics and reporting.
C. To directly process and transform data from its raw form into a structured format suitable for analysis.
D. To orchestrate and manage the flow of data and tasks within the pipeline, ensuring correct processing order, handling errors, and managing dependencies between steps.
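The behavior described in option D can be made concrete with a small sketch. The following is a hypothetical, self-contained orchestrator in plain Python, not the actual DLT or Azure API; the names `Step` and `Orchestrator` are illustrative. It shows the three duties the correct option names: resolving dependencies into a correct processing order, running steps in that order, and handling per-step errors (here, with simple retries).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]           # transforms shared pipeline state
    depends_on: List[str] = field(default_factory=list)
    retries: int = 1                         # simple error-handling policy

class Orchestrator:
    """Illustrative control mechanism: ordering, dependencies, error handling."""

    def __init__(self) -> None:
        self.steps: Dict[str, Step] = {}

    def add(self, step: Step) -> None:
        self.steps[step.name] = step

    def _order(self) -> List[str]:
        # Topological sort so every step runs after its dependencies.
        ordered: List[str] = []
        seen: set = set()

        def visit(name: str, stack: tuple = ()) -> None:
            if name in seen:
                return
            if name in stack:
                raise ValueError(f"dependency cycle at {name}")
            for dep in self.steps[name].depends_on:
                visit(dep, stack + (name,))
            seen.add(name)
            ordered.append(name)

        for name in self.steps:
            visit(name)
        return ordered

    def run(self, state: dict) -> dict:
        for name in self._order():
            step = self.steps[name]
            for attempt in range(step.retries + 1):
                try:
                    state = step.action(state)
                    break                    # step succeeded
                except Exception:
                    if attempt == step.retries:
                        raise               # retries exhausted: surface the error
        return state

# Usage: "clean" depends on "ingest", so the orchestrator runs ingest first.
orch = Orchestrator()
orch.add(Step("ingest", lambda s: {**s, "raw": [1, 2, 3]}))
orch.add(Step("clean",
              lambda s: {**s, "clean": [x * 2 for x in s["raw"]]},
              depends_on=["ingest"]))
result = orch.run({})
```

Contrast this with options A through C: storage (A, B) and transformation (C) are what the individual steps or tables do, while the control mechanism only coordinates them.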