
As a Microsoft Fabric Analytics Engineer Associate, you are optimizing a data pipeline in Azure Data Factory that experiences performance bottlenecks during data loading. The bottleneck stems from writing a large number of small files to a Delta table. Considering the need for cost efficiency, compliance with data governance policies, and scalability, which of the following strategies would BEST optimize writes to the Delta table and improve performance? (Choose one option.)
A. Increase the batch size of the data written to the Delta table, assuming the data source allows larger batches without impacting other processes.
B. Use 'overwrite' mode for each write operation to replace the entire Delta table, ensuring minimal file operations but risking data loss in scenarios requiring incremental updates.
C. Implement a staging area to consolidate small files into larger files before writing to the Delta table, reducing the number of write operations and improving efficiency without compromising data integrity (a sketch of this approach follows the options).
D. Disable the transaction log for the Delta table to eliminate the overhead of logging, significantly speeding up writes at the cost of losing the ability to track changes and perform time travel operations.
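For context, here is a minimal PySpark sketch of the consolidation pattern described in option C. The storage paths, table location, and target partition count are illustrative assumptions, not values given in the question; the pattern simply reads the small staged files, reduces the number of output partitions, and appends to the Delta table in a single write.

```python
# Minimal sketch of the staging-and-consolidation pattern (option C).
# Paths, names, and the partition count below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("consolidate-small-files").getOrCreate()

# Read the many small files that landed in the staging area.
staged_df = spark.read.parquet(
    "abfss://staging@<storage-account>.dfs.core.windows.net/landing/"
)

# Reduce the number of output partitions so the write produces a few large
# files instead of thousands of small ones. The target count is a tuning
# assumption; size it so output files land roughly in the 128 MB-1 GB range.
consolidated_df = staged_df.coalesce(8)

# Append the consolidated data to the Delta table in one write operation.
(consolidated_df.write
    .format("delta")
    .mode("append")
    .save("abfss://lakehouse@<storage-account>.dfs.core.windows.net/tables/sales"))
```

As a follow-up, Delta Lake runtimes that support the OPTIMIZE command can also compact small files that already exist in the table; the staging pattern above avoids creating them in the first place.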