
Answer-first summary for fast verification
Answer: Use Azure Data Factory to create a pipeline that moves data from the on-premises data lake to Azure Data Lake Storage Gen2.
Azure Data Factory is the purpose-built data integration service for this scenario: a pipeline with a copy activity, connected to the on-premises data lake through a self-hosted integration runtime, moves the data into Azure Data Lake Storage Gen2 in scheduled batches, making it available in the cloud for further processing. Option B is incorrect because Azure Databricks is primarily a data processing and analytics engine; while it can read external data, it is not the recommended service for orchestrating data movement from on-premises sources. Option C is incorrect because Azure Data Lake Storage Gen2 is only a storage service; naming it as both source and destination does nothing to move the data out of the on-premises data lake. Option D is incorrect because Azure Synapse Analytics addresses the processing and warehousing stage, not the migration of data from on-premises storage.
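As a rough illustration of what such a pipeline looks like, here is a minimal Data Factory pipeline definition with a single copy activity. The dataset names (`OnPremFileSystemDataset`, `AdlsGen2Dataset`) and the pipeline name are placeholders; in practice the source dataset's linked service would be bound to a self-hosted integration runtime so the copy activity can reach the on-premises file system.

```json
{
  "name": "CopyOnPremToAdlsGen2",
  "properties": {
    "activities": [
      {
        "name": "CopyFromOnPremLake",
        "type": "Copy",
        "inputs": [
          { "referenceName": "OnPremFileSystemDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "AdlsGen2Dataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "FileSystemSource", "recursive": true },
          "sink": { "type": "AzureBlobFSSink" }
        }
      }
    ]
  }
}
```

A batch schedule is then attached by triggering this pipeline with a schedule or tumbling-window trigger rather than running it manually.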
Author: LeetQuiz Editorial Team
Your company is planning to migrate its data processing workloads to Azure. The data is currently stored in an on-premises data lake and needs to be processed in a batch manner. You are tasked with developing a batch processing solution using Azure Data Lake Storage Gen2. How would you approach this task?
A
Use Azure Data Factory to create a pipeline that moves data from the on-premises data lake to Azure Data Lake Storage Gen2.
B
Use Azure Databricks to read data from the on-premises data lake and write it to Azure Data Lake Storage Gen2.
C
Use Azure Data Lake Storage Gen2 as the source and destination for the data processing pipeline.
D
Use Azure Synapse Analytics to process the data stored in Azure Data Lake Storage Gen2.