You need to implement a data pipeline in Azure Data Factory that processes large volumes of data. Which of the following strategies would you use to optimize performance and minimize costs?
A. Increase the number of parallel activities to maximize resource utilization.
B. Use Azure Data Factory's autoscaling feature to dynamically allocate resources based on workload.
C. Manually adjust the degree of parallelism for each activity based on the size of the input data.
D. Disable all monitoring and logging to reduce overhead.
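For context on what "degree of parallelism" means in option C: a Copy activity's JSON definition exposes the documented `parallelCopies` and `dataIntegrationUnits` properties, which control per-activity parallelism and the compute allocated to the copy. Below is a minimal sketch, assuming a delimited-text-to-SQL copy; the activity and dataset names (`CopyLargeDataset`, `BlobInputDataset`, `SqlOutputDataset`) and the specific values are placeholders, not a prescribed configuration:

```jsonc
{
  "name": "CopyLargeDataset",  // placeholder activity name
  "type": "Copy",
  "inputs":  [ { "referenceName": "BlobInputDataset",  "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SqlOutputDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "DelimitedTextSource" },
    "sink":   { "type": "SqlSink" },
    "parallelCopies": 8,         // degree of parallelism for this activity (option C)
    "dataIntegrationUnits": 32   // compute units; tune to input size to balance speed vs. cost
  }
}
```

When `parallelCopies` and `dataIntegrationUnits` are left unset, the service picks values automatically, which is the behavior option B alludes to; setting them explicitly per activity is the manual tuning described in option C.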