
You need to implement a data pipeline that processes data from a source system containing a large number of small files. Which of the following strategies should you use to optimize performance and minimize storage costs in Azure Data Factory?
A. Use the 'Copy Data' activity to copy each file individually from the source system to the destination.
B. Combine the small files into larger files using a custom script or application before loading them into the destination (see the sketch after the options).
C. Use a wildcard path in the 'Copy Data' activity to copy all files from the source system to a staging area, then process them in the destination.
D. Enable the 'Enable staging' option in the 'Copy Data' activity to use temporary storage for intermediate processing.
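
Option B describes consolidating the small files before loading, since per-file overhead in the Copy activity and per-object transaction costs are what make many tiny files expensive. The sketch below is one minimal way that pre-processing step could look; the directory names and the 128 MB batch-size target are illustrative assumptions, not part of the question.

```python
"""Minimal sketch of option B: concatenate many small files into a few
larger batch files before loading them with Azure Data Factory.
Paths and the target batch size are assumptions for illustration."""
from pathlib import Path

SOURCE_DIR = Path("source_files")      # assumed directory of small files
OUTPUT_DIR = Path("merged_files")      # assumed staging directory for ADF to pick up
TARGET_SIZE_BYTES = 128 * 1024 * 1024  # aim for roughly 128 MB per merged file


def merge_small_files(source_dir: Path, output_dir: Path, target_size: int) -> None:
    """Append small files into rolling batch files of roughly target_size bytes."""
    output_dir.mkdir(parents=True, exist_ok=True)
    batch_index, current_size = 0, 0
    out = open(output_dir / f"batch_{batch_index:05d}.dat", "wb")
    try:
        for small_file in sorted(source_dir.glob("*")):
            if not small_file.is_file():
                continue
            data = small_file.read_bytes()
            # Roll over to a new batch file once the size target is reached.
            if current_size and current_size + len(data) > target_size:
                out.close()
                batch_index += 1
                current_size = 0
                out = open(output_dir / f"batch_{batch_index:05d}.dat", "wb")
            out.write(data)
            current_size += len(data)
    finally:
        out.close()


if __name__ == "__main__":
    merge_small_files(SOURCE_DIR, OUTPUT_DIR, TARGET_SIZE_BYTES)
```

Note that for file-based sinks such as Blob Storage or ADLS Gen2, the Copy activity also offers a 'Merge files' copy behavior that consolidates source files into a single output file, which can achieve a similar result without maintaining a separate script.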