
Answer-first summary for fast verification
Answer: Use Azure Databricks to read data from Azure Blob Storage and process it using Spark.
Reading directly from Azure Blob Storage into Azure Databricks and processing the data with Spark is the correct approach: Databricks mounts or addresses Blob Storage natively, and Spark handles large-scale batch processing efficiently. Option A is incorrect because Azure Data Factory orchestrates and moves data between storage services; it does not itself process the data. Option C is incorrect because Azure Data Lake Storage Gen2 is a storage service, not a processing engine. Option D is incorrect because Azure Synapse Analytics is a data warehousing service and is not the tool asked for here to process data stored in Azure Blob Storage with Spark.
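The approach in option B can be sketched as follows. This is a minimal illustration, not a production recipe: the storage account `mystorageacct`, container names, secret scope, and column names are all hypothetical placeholders, and the Spark calls are shown as they would typically appear in a Databricks notebook (where a `spark` session is provided).

```python
def wasbs_uri(container: str, account: str, path: str) -> str:
    """Build a wasbs:// URI addressing a path in Azure Blob Storage."""
    return f"wasbs://{container}@{account}.blob.core.windows.net/{path}"


uri = wasbs_uri("raw-data", "mystorageacct", "events/2024/")
print(uri)  # wasbs://raw-data@mystorageacct.blob.core.windows.net/events/2024/

# Inside a Databricks notebook (hypothetical secret scope and key names),
# the storage key is supplied via Spark configuration, then Spark reads,
# transforms, and writes the data back to Blob Storage:
#
# spark.conf.set(
#     "fs.azure.account.key.mystorageacct.blob.core.windows.net",
#     dbutils.secrets.get(scope="blob", key="storage-key"),
# )
# df = spark.read.option("header", "true").csv(uri)
# daily = df.groupBy("date").count()  # example batch aggregation
# daily.write.mode("overwrite").parquet(
#     wasbs_uri("processed", "mystorageacct", "daily/")
# )
```

Note that no data is copied out of Blob Storage ahead of time: Spark reads it in place, which is what distinguishes option B from the Data Factory pipeline in option A.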
Author: LeetQuiz Editorial Team
You are tasked with developing a batch processing solution using Azure Databricks. Your company has a large amount of data stored in Azure Blob Storage that needs to be processed. How would you approach this task?
A. Use Azure Data Factory to create a pipeline that moves data from Azure Blob Storage to Azure Databricks.
B. Use Azure Databricks to read data from Azure Blob Storage and process it using Spark.
C. Use Azure Data Lake Storage Gen2 as the storage service for the processed data.
D. Use Azure Synapse Analytics to process the data stored in Azure Blob Storage.