A data engineering team is tasked with converting a 1 TB JSON dataset into Parquet format, producing part-files of approximately 512 MB each. Given that built-in Databricks features such as Auto-Optimize and Auto-Compaction are not available for this workload, which strategy most efficiently meets the target file size without triggering a data shuffle?
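
For context, the approach this question points toward is usually controlling partition size at read time via `spark.sql.files.maxPartitionBytes`, rather than reshaping partitions afterward with `repartition()` (which shuffles) or `coalesce()` (narrow, but requires guessing a partition count). Below is a minimal PySpark sketch under that assumption; the paths and app name are hypothetical, and on Databricks the `spark` session already exists:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Pack up to ~512 MB of input into each read partition. With only narrow
# transformations downstream, each task then writes one part-file of
# roughly that size -- no repartition/coalesce and hence no shuffle.
spark.conf.set("spark.sql.files.maxPartitionBytes", 512 * 1024 * 1024)

# Hypothetical source and target paths for illustration.
df = spark.read.json("dbfs:/source/events_json/")
df.write.mode("overwrite").parquet("dbfs:/target/events_parquet/")
```

Note that this sizes the *input* partitions; the resulting Parquet files will typically come out smaller than 512 MB because of columnar encoding and compression, so in practice the setting may need tuning against the observed output.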