
You are designing a scalable, efficient data pipeline in Azure Databricks that processes data from multiple sources in varying formats (e.g., JSON, CSV, Avro). Considering performance, cost, and ease of implementation, which of the following approaches BEST leverages Delta Lake's capabilities for data format conversion? Choose the single best option.
A. Implementing custom application logic, or using third-party libraries, for each data format to parse and transform the data before loading it into Delta Lake, ensuring precise control over the conversion process.
B. Using Delta Lake's native support for reading and writing data in various formats, including JSON, CSV, and Avro, to handle format conversion directly without additional processing layers (a minimal sketch of this pattern follows the options).
C. Manually transforming the data for each format outside of Delta Lake with scripting languages such as Python or R, then loading the transformed data into Delta Lake tables.
D. Employing an external data processing system, such as Apache Spark, to parse and transform the data into a uniform format before ingestion into Delta Lake, thereby offloading the conversion workload.
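For reference, the pattern described in option B is typically expressed on Databricks as follows: each source format is read with the DataFrame readers built into the Databricks Runtime and written straight to Delta tables, with no per-format custom code and no separate conversion system. This is a minimal sketch only; the mount paths and table names are hypothetical placeholders, and `spark` is the SparkSession that Databricks notebooks provide automatically.

# Minimal sketch (hypothetical paths and table names): ingest JSON, CSV, and
# Avro sources with the DataFrame readers available in Azure Databricks and
# land each one as a Delta table.

json_df = spark.read.format("json").load("/mnt/raw/orders_json/")

csv_df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("/mnt/raw/orders_csv/")
)

avro_df = spark.read.format("avro").load("/mnt/raw/orders_avro/")

# Writing as Delta gives every source a single, uniform storage format with
# ACID transactions, so downstream consumers never deal with the raw formats.
json_df.write.format("delta").mode("append").saveAsTable("bronze_orders_json")
csv_df.write.format("delta").mode("append").saveAsTable("bronze_orders_csv")
avro_df.write.format("delta").mode("append").saveAsTable("bronze_orders_avro")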