You are tasked with designing a data pipeline in Azure Data Factory that extracts data from various sources and loads it into a data lake. How can you ensure that the pipeline is scalable and can handle growing data volumes over time?
A
Design the pipeline with a fixed set of resources and assume that it will be sufficient for future data volumes.
B
Implement a pipeline that can dynamically scale based on the data volume and processing requirements.
C
Ignore the scalability of the pipeline, since it is not important when extracting and loading data into a data lake.
D
Focus only on optimizing the performance of the pipeline for the current data volumes, without considering future growth.
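Option B describes the scalable approach: rather than fixing resources up front, let the service scale compute with the workload. As a rough illustration (not a complete pipeline definition), the sketch below shows a Copy activity in ADF's JSON authoring format where the Data Integration Units and degree of parallelism are left for the service to determine automatically, so throughput can grow with data volume; the activity name and source/sink types here are placeholders.

```json
{
    "name": "CopyToDataLake",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "DelimitedTextSource" },
        "sink": { "type": "ParquetSink" },
        "dataIntegrationUnits": "Auto",
        "parallelCopies": 0
    }
}
```

Omitting or auto-setting these properties lets Azure Data Factory pick DIUs and parallel copies per run based on the source/sink pair and data size, which is what makes the pipeline elastic as volumes increase.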