What is the optimal strategy for writing a 1 TB JSON dataset to Parquet with a target size of roughly 512 MB per output file, while avoiding data shuffling, when Delta Lake's built-in file-sizing features such as Auto Optimize and Auto Compaction are unavailable?
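
To make the scenario concrete, here is a minimal PySpark sketch of the kind of shuffle-free write being asked about. The paths, the `maxPartitionBytes` value, and the `records_per_file` estimate are illustrative assumptions, and `maxRecordsPerFile` is only one candidate knob, not a claimed optimal answer: it can split an oversized task output into multiple files but cannot merge small outputs without a shuffle.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet-512mb").getOrCreate()

# Read side: bound how much of the 1 TB JSON source each input split covers.
# The 512 MB value mirrors the target in the question; it is not a tuned figure.
spark.conf.set("spark.sql.files.maxPartitionBytes", str(512 * 1024 * 1024))

df = spark.read.json("s3://bucket/raw/events/")  # hypothetical source path

# Write side: with Auto Optimize / Auto Compaction unavailable, one shuffle-free
# lever is maxRecordsPerFile, which caps rows per output file within each task.
# The estimate below is a placeholder; it would need to be derived from the
# dataset's actual average Parquet row size on disk.
records_per_file = 4_000_000  # hypothetical: ~128 bytes/row -> ~512 MB per file

(df.write
   .option("maxRecordsPerFile", records_per_file)
   .mode("overwrite")
   .parquet("s3://bucket/curated/events_parquet/"))  # hypothetical target path
```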