
You are tasked with developing a batch processing solution for a financial services company that needs to process large volumes of trade data daily. The solution must support upserts, the ability to revert data to a previous state, and configurable exception handling. Additionally, the company wants to integrate Jupyter or Python notebooks into the data pipeline for data exploration and analysis. How would you approach this task?
A. Use Azure Data Factory to orchestrate the data pipeline and leverage Azure Data Lake Storage Gen2 for storing the raw data.
B. Use Azure Databricks to process the data using its built-in support for Delta Lake, and integrate Jupyter notebooks for data exploration and analysis.
C. Use Azure Stream Analytics to process the data in real-time and store the results in Azure Cosmos DB.
D. Use Azure Functions to process the data in small batches and store the results in Azure SQL Database.
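
For context on the requirements in the question, here is a minimal sketch of how Delta Lake on Azure Databricks (option B) could handle upserts and reverting data to a previous state. The table path, source path, version number, and the `trade_id` join key are all hypothetical, chosen only for illustration:

```python
# Sketch: Delta Lake upsert (MERGE) and revert (time travel) on Databricks.
# Paths, column names, and the version number are assumptions, not from the question.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, a session is preconfigured

# Upsert: merge today's batch of trades into the existing Delta table.
trades = DeltaTable.forPath(spark, "/mnt/data/trades")            # hypothetical path
daily_updates = spark.read.parquet("/mnt/landing/trades_today")   # hypothetical source

(trades.alias("t")
    .merge(daily_updates.alias("u"), "t.trade_id = u.trade_id")   # hypothetical key
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert rows that are new
    .execute())

# Revert: restore the table to an earlier state using Delta time travel.
trades.restoreToVersion(5)  # or trades.restoreToTimestamp("2024-01-01")
```

Because Delta Lake keeps a transaction log of every table version, `MERGE` covers the upsert requirement and `RESTORE` (time travel) covers the revert requirement, while the work itself runs inside Databricks notebooks for exploration and analysis.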