
Answer-first summary for fast verification
Answer: Use Azure Databricks to read the data from the sources, perform schema mapping and transformation using its built-in functions, and write the results to the Delta Lake.
Option B is correct because Azure Databricks can read from heterogeneous sources, apply schema mapping and transformations with Spark's built-in functions, and write the results directly to Delta Lake, which is natively supported on Databricks. Option A falls short because the Copy Data activity in Azure Data Factory supports only basic column mapping, not the richer transformations needed to unify differing schemas. Option C performs transformation with T-SQL inside Azure SQL Data Warehouse, but that service is a relational warehouse and does not write its results out to Delta Lake. Option D stores the data in Azure Cosmos DB, an operational NoSQL database; its multi-model support addresses storage formats, not batch schema transformation or Delta Lake output.
Author: LeetQuiz Editorial Team
You are working on a batch processing solution that needs to handle data from multiple sources with different schemas. The solution should be able to read the data, transform it into a unified schema, and write it to a Delta Lake. How would you implement this functionality?
A
Use Azure Data Factory to orchestrate the data flow and use the Copy Data activity to read and transform the data.
B
Use Azure Databricks to read the data from the sources, perform schema mapping and transformation using its built-in functions, and write the results to the Delta Lake.
C
Use Azure SQL Data Warehouse to store the data and perform schema mapping and transformation using T-SQL.
D
Use Azure Cosmos DB to store the data and perform schema mapping and transformation using its built-in support for multiple data models.