In a Spark job with extensive shuffle activity caused by wide transformations, which combination of settings is most effective at reducing shuffle spill and network I/O?
A
Tuning spark.executor.memory and spark.shuffle.file.buffer
B
Adjusting spark.memory.fraction and spark.reducer.maxSizeInFlight
C
Increasing spark.sql.shuffle.partitions and enabling spark.shuffle.compress
D
Configuring spark.default.parallelism and spark.shuffle.spill.compress
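All of the properties listed above are real Spark configuration keys, and each pair influences shuffle behavior differently. As one hedged illustration of how such settings are applied in practice, the sketch below shows option C's combination passed to `spark-submit`: raising `spark.sql.shuffle.partitions` spreads shuffle data across more, smaller partitions so each task's working set is likelier to fit in memory (reducing spill), while `spark.shuffle.compress` shrinks shuffle blocks before they cross the network. The partition count of 400 is an arbitrary example value, not a recommendation.

```shell
# Sketch only: applying option C's settings at submit time.
# 400 is an illustrative partition count; tune it to your data volume
# (the default is 200 for Spark SQL shuffles).
spark-submit \
  --conf spark.sql.shuffle.partitions=400 \
  --conf spark.shuffle.compress=true \
  my_job.py
```

The same keys can equally be set in `spark-defaults.conf` or via `SparkSession.builder.config(...)`; `spark.shuffle.compress` defaults to `true` in recent Spark releases, so the lever that usually matters most here is the partition count.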