
Answer-first summary for fast verification
Answer: Use Azure Databricks to process the data using its built-in support for Delta Lake, and integrate Jupyter notebooks for data exploration and analysis, while implementing data privacy and security measures.
Option B is the most suitable approach because Azure Databricks covers every requirement in a single platform. Its built-in support for Delta Lake provides ACID transactions, efficient upserts via the MERGE operation, and data versioning (time travel) for auditability. Databricks notebooks support Python and can import Jupyter (.ipynb) notebooks directly, so data exploration and analysis can run inside the same pipeline. Privacy and security for patient data can be enforced through encryption at rest and in transit plus fine-grained access control. The other options each miss a requirement: Option A secures storage and orchestration but offers no native upsert or notebook integration; Option C targets real-time streaming rather than daily batch processing; Option D would require custom code for upserts and security, and Azure Functions is a poor fit for large daily batches.
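As a sketch of how Delta Lake expresses an upsert, the MERGE statement below matches daily update rows to an existing patient table; the table and column names are illustrative, not part of the question:

```sql
-- Upsert daily patient updates into the Delta table.
-- Matched rows are updated in place; new rows are inserted.
MERGE INTO patients AS target
USING daily_patient_updates AS source
ON target.patient_id = source.patient_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *
```

Because Delta Lake versions each write, an accidental merge can also be inspected or rolled back via time travel.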
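For the exception-handling requirement, a minimal sketch of a notebook-friendly wrapper is shown below; `run_batch_step` and its arguments are illustrative, and in a real pipeline the step function would invoke Spark or Delta Lake operations:

```python
import logging

def run_batch_step(step_name, fn, *args, **kwargs):
    """Run one pipeline step, logging any failure with its traceback.

    Re-raising lets the orchestrator (e.g. a Databricks job) mark the
    run as failed and apply its retry or alerting policy.
    """
    try:
        return fn(*args, **kwargs)
    except Exception:
        logging.exception("Batch step %r failed", step_name)
        raise
```

A failed step is logged with full context before the error propagates, so the daily job surfaces failures instead of silently skipping records.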
Author: LeetQuiz Editorial Team
You are tasked with developing a batch processing solution for a healthcare organization that needs to process patient data daily. The solution should ensure data privacy and security, handle upserts, and configure exception handling. Additionally, the organization wants to integrate Jupyter or Python notebooks into the data pipeline for data exploration and analysis. How would you approach this task?
A
Use Azure Data Factory to orchestrate the data pipeline and leverage Azure Data Lake Storage Gen2 for storing the raw data, ensuring data privacy and security through encryption and access control.
B
Use Azure Databricks to process the data using its built-in support for Delta Lake, and integrate Jupyter notebooks for data exploration and analysis, while implementing data privacy and security measures.
C
Use Azure Stream Analytics to process the data in real-time and store the results in Azure SQL Database, ensuring data privacy and security through built-in features.
D
Use Azure Functions to process the data in small batches and store the results in Azure Cosmos DB, implementing data privacy and security through custom code.