In a scenario where you have a large PySpark DataFrame and need to reduce its number of partitions efficiently, without shuffling all of the data, which method (or partition hint) would you use, and why? Describe the process and the implications of using coalesce versus repartition.
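A minimal sketch of the two approaches is below. It assumes a synthetic DataFrame built with spark.range; the application name, partition counts, and temp view name are illustrative choices, not requirements.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("coalesce-vs-repartition").getOrCreate()

# Synthetic DataFrame with many input partitions (illustrative numbers).
df = spark.range(0, 10_000_000, numPartitions=200)

# coalesce(n) is a narrow transformation: it merges existing partitions
# on the same executors without a full shuffle, so it is the cheap way
# to *reduce* the partition count (e.g. before writing fewer output files).
# The trade-off is that the resulting partitions can be unevenly sized.
fewer = df.coalesce(20)

# repartition(n) is a wide transformation: it performs a full shuffle and
# redistributes rows evenly. It can increase or decrease the partition
# count and can fix skew, at the cost of moving all the data.
rebalanced = df.repartition(20)

print(fewer.rdd.getNumPartitions())       # 20
print(rebalanced.rdd.getNumPartitions())  # 20

# The same behaviour is available as Spark SQL partition hints:
df.createOrReplaceTempView("t")
spark.sql("SELECT /*+ COALESCE(20) */ * FROM t")
spark.sql("SELECT /*+ REPARTITION(20) */ * FROM t")
```

In short, coalesce (or the COALESCE hint) is the answer when you only need fewer partitions and want to avoid a full shuffle, while repartition always shuffles but produces evenly sized partitions.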