Scenario: You must write a large PySpark DataFrame to disk in a way that ensures optimal read performance for downstream processing. The DataFrame contains sensitive data that must be encrypted at rest, and organizational policy caps individual part-files at 128MB to keep downstream processing efficient.

Which of the following approaches BEST meets these requirements? Choose the single best option.
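For context, the kind of approach such a question is probing can be sketched as follows. This is a non-runnable illustration, not a definitive answer: it assumes a live `SparkSession`, an existing DataFrame `df`, an illustrative input-size estimate, and an S3 bucket with server-side encryption enabled (encryption at rest is typically enforced by the storage layer, e.g. S3 SSE-KMS or HDFS transparent encryption, rather than by the Spark write call itself).

```python
# Sketch only: assumes a running Spark cluster, a DataFrame `df`,
# and a destination bucket configured with server-side encryption.

TARGET_FILE_BYTES = 128 * 1024 * 1024   # org policy: max 128MB per part-file

# Estimate output partition count so each part-file lands at or under the
# 128MB cap. `estimated_bytes` is a placeholder; in practice it would come
# from profiling a sample or from table statistics.
estimated_bytes = 64 * 1024 ** 3        # e.g. ~64GB of input (assumption)
num_partitions = max(1, estimated_bytes // TARGET_FILE_BYTES)

(df.repartition(num_partitions)          # one task writes roughly one part-file
   .write
   .mode("overwrite")
   # Snappy-compressed Parquet: columnar and splittable, which favors
   # downstream read performance.
   .option("compression", "snappy")
   # Safety net capping rows per file; tune to your average row width.
   .option("maxRecordsPerFile", 1_000_000)
   .parquet("s3://bucket/warehouse/table/"))  # bucket enforces encryption at rest
```

`maxRecordsPerFile` caps file size by row count rather than bytes, so the `repartition` sizing and the record cap work together: the former targets the 128MB goal, the latter guards against unexpectedly wide partitions.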