Databricks Certified Data Engineer - Professional Quiz - LeetQuiz

A data engineering team needs to ingest a 1 TB JSON dataset and write it out as Parquet with a target part-file size of approximately 512 MB. Given that Delta Lake features such as Auto Optimize are unavailable, how can they achieve this target size with good performance while strictly avoiding any data shuffle?
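One common line of reasoning (not stated in the question itself, so treat this as an illustrative sketch): `coalesce()` is a narrow transformation in Spark, so reducing the partition count with it avoids a shuffle, whereas `repartition()` would trigger one. With roughly 1 TB of data and a 512 MB target, that suggests 1 TiB / 512 MiB = 2048 output partitions. The helper below (hypothetical names, plain Python) computes that count; the commented lines show how it might feed into PySpark.

```python
def coalesce_partition_count(total_bytes: int, target_file_bytes: int) -> int:
    """Number of output partitions so each part file lands near the target size.

    Uses ceiling division so files come in at or under the target, not over.
    Note: output Parquet is usually smaller than the JSON input after
    compression, so a count derived from input size is only an estimate.
    """
    return max(1, -(-total_bytes // target_file_bytes))

TIB = 1024 ** 4
MIB = 1024 ** 2

# 1 TB of JSON with a 512 MB part-file target -> 2048 output partitions.
n_parts = coalesce_partition_count(1 * TIB, 512 * MIB)

# In PySpark this count would then feed coalesce(), a narrow (shuffle-free)
# transformation, instead of repartition(), which performs a full shuffle:
#
#   df = spark.read.json("s3://bucket/raw/")            # hypothetical path
#   df.coalesce(n_parts).write.parquet("s3://bucket/out/")

print(n_parts)  # 2048
```

A caveat worth remembering: `coalesce(n)` can only reduce the partition count without a shuffle, so this works because a 1 TB read (at Spark's default ~128 MB input split size) starts with far more than 2048 partitions.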