As a Microsoft Fabric Analytics Engineer Associate, you are optimizing a data pipeline in Azure Data Factory that experiences performance bottlenecks during the data loading process. The issue stems from writing a large volume of small files to a Delta table. Considering the need for cost efficiency, compliance with data governance policies, and scalability, which of the following strategies would BEST optimize writes to the Delta table and improve performance? (Choose one option.)
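For context, the scenario describes the classic Delta Lake "small files" problem: many tiny files inflate metadata and slow reads and writes. The sketch below is not part of the question; it is a minimal PySpark illustration of two common remedies, assuming a Spark session with the open-source delta-spark package and hypothetical table and source paths (TABLE_PATH, Files/staging/sales/).

    # Minimal sketch (illustrative only): reduce small files at write time,
    # then compact existing files. Paths and partition counts are hypothetical.
    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = (
        SparkSession.builder
        .appName("delta-small-files-demo")
        # Configuration required by the open-source delta-spark package.
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    TABLE_PATH = "Tables/sales"  # hypothetical Delta table location

    # 1. Write fewer, larger files by coalescing partitions before the write,
    #    instead of letting each task emit its own tiny file.
    df = spark.read.parquet("Files/staging/sales/")  # hypothetical staging data
    (
        df.coalesce(8)
          .write.format("delta")
          .mode("append")
          .save(TABLE_PATH)
    )

    # 2. Periodically bin-pack files that were already written, using the
    #    Delta Lake OPTIMIZE (compaction) API.
    DeltaTable.forPath(spark, TABLE_PATH).optimize().executeCompaction()

The same compaction can be run as SQL ("OPTIMIZE <table>") in a Fabric notebook; the choice between write-time coalescing and scheduled compaction is a cost/latency trade-off the question is probing.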