
Answer-first summary for fast verification
Answer: Use Azure Data Factory to orchestrate the data pipeline and leverage Azure Databricks for processing and analysis.
Option A is the most suitable approach: Azure Data Factory orchestrates the pipeline end to end and provides configurable exception handling (activity retry policies and failure dependency paths), while Azure Databricks processes large volumes of data efficiently and, with Delta Lake tables, supports upserts (MERGE INTO) and reverting data to a previous state (time travel / RESTORE). Databricks notebooks also support Python and can be run as pipeline activities for data exploration and analysis. Option B targets real-time stream processing, not daily batch workloads. Option C covers storage and processing but provides no orchestration or exception handling. Option D's Azure Functions are not suited to batch-processing millions of transactions and offer no built-in upsert or revert capability.
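The upsert and revert requirements map to Delta Lake's MERGE and time-travel features in Databricks. As a minimal, pure-Python sketch of those semantics (a toy versioned store, not the actual Delta Lake API; class and method names here are illustrative):

```python
from copy import deepcopy

class VersionedStore:
    """Toy illustration of the Delta Lake behaviors the answer relies on:
    upserts (MERGE INTO) and reverting to a previous version (time travel)."""

    def __init__(self):
        self._versions = [{}]  # version 0 is an empty table

    @property
    def current(self):
        return self._versions[-1]

    def upsert(self, rows):
        """Insert new keys, update existing ones; commit as a new version."""
        snapshot = deepcopy(self.current)
        snapshot.update(rows)
        self._versions.append(snapshot)

    def restore(self, version):
        """Revert by committing an old snapshot as the newest version."""
        if not 0 <= version < len(self._versions):
            # exception-handling hook: the orchestrator can catch and retry
            raise ValueError(f"unknown version {version}")
        self._versions.append(deepcopy(self._versions[version]))

store = VersionedStore()
store.upsert({"order-1": 100, "order-2": 250})  # version 1
store.upsert({"order-2": 300, "order-3": 75})   # version 2: updates order-2
store.restore(1)                                # version 3 equals version 1
print(store.current["order-2"])  # 250 after the revert
```

In Databricks itself, the equivalents are `MERGE INTO target USING source ...` for the upsert and `RESTORE TABLE target TO VERSION AS OF n` for the revert, with Azure Data Factory supplying retries and failure paths around the notebook activity.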
Author: LeetQuiz Editorial Team
You are tasked with developing a batch processing solution for a large e-commerce company that needs to process millions of transactions daily. The solution should be able to handle upserts, revert data to a previous state, and configure exception handling. Additionally, the company wants to integrate Jupyter or Python notebooks into the data pipeline for data exploration and analysis. How would you approach this task?
A
Use Azure Data Factory to orchestrate the data pipeline and leverage Azure Databricks for processing and analysis.
B
Use Azure Stream Analytics to process the data in real-time and store the results in Azure SQL Database.
C
Use Azure Data Lake Storage Gen2 to store the raw data and process it using Azure Databricks.
D
Use Azure Functions to process the data in small batches and store the results in Azure Cosmos DB.