In a data pipeline that involves processing large volumes of data, you need to split the data into smaller chunks for more efficient processing. Which Azure service and feature would you use to achieve this?
A. Azure Data Factory with the 'ForEach' activity
B. Azure Data Lake Storage Gen2 with hierarchical namespaces
C. Azure Databricks with the 'partitionBy' function in Spark SQL
D. Azure Stream Analytics with the 'Windowing' feature
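For context on option C: Spark's `DataFrameWriter.partitionBy` writes rows into separate directories keyed by a column's value, so each chunk can be processed independently. The effect can be sketched conceptually in plain Python (the sample rows and the `region` key below are hypothetical, not from the question):

```python
from collections import defaultdict

# Hypothetical sample rows, standing in for a Spark DataFrame.
rows = [
    {"region": "east", "amount": 10},
    {"region": "west", "amount": 20},
    {"region": "east", "amount": 5},
]

def partition_by(records, key):
    """Group records into chunks by a key column, mimicking the
    one-directory-per-value layout that Spark's partitionBy produces."""
    chunks = defaultdict(list)
    for record in records:
        chunks[record[key]].append(record)
    return dict(chunks)

chunks = partition_by(rows, "region")
print(sorted(chunks))       # ['east', 'west'] — one chunk per distinct key
print(len(chunks["east"]))  # 2
```

In real Spark SQL this would be a single call such as `df.write.partitionBy("region").parquet(path)`; the sketch only shows the grouping idea behind it.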