When encountering OutOfMemory errors during a large join operation in Spark, which configuration adjustment specifically targets this issue without unnecessarily allocating extra resources?
A. Set spark.memory.fraction to a higher value to allocate more memory for shuffle operations.
B. Increase spark.driver.memory to provide more memory for join operations.
C. Adjust spark.sql.autoBroadcastJoinThreshold to control the size of data being broadcast.
D. Decrease spark.executor.memory to force more frequent garbage collection.
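All four options refer to real Spark configuration keys. The sketch below is a minimal, non-authoritative PySpark example, assuming hypothetical table paths and a join column named customer_id, that shows where the broadcast-join threshold from option C could be set at session creation, with the memory settings from the other options shown as commented-out alternatives:

```python
# Minimal sketch, assuming PySpark is installed and the input paths exist.
# Paths, table names, and the join column are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("join-tuning-sketch")
    # Option C: cap the size (in bytes) of tables Spark will auto-broadcast in a join.
    # The default is 10 MB; raising it lets a modestly sized dimension table be
    # broadcast to executors instead of shuffled across the cluster.
    .config("spark.sql.autoBroadcastJoinThreshold", 50 * 1024 * 1024)
    # Options A/B/D adjust memory pools globally rather than targeting the join:
    # .config("spark.memory.fraction", "0.8")
    # .config("spark.driver.memory", "8g")      # driver memory must be set before launch
    # .config("spark.executor.memory", "4g")
    .getOrCreate()
)

# Hypothetical tables: a large fact table joined to a small dimension table.
orders = spark.read.parquet("/data/orders")        # large
customers = spark.read.parquet("/data/customers")  # small enough to broadcast

joined = orders.join(customers, on="customer_id", how="inner")
joined.explain()  # inspect the physical plan for a BroadcastHashJoin
```

In this setup, only spark.sql.autoBroadcastJoinThreshold changes how the join itself is planned; the other keys resize memory pools for the whole application rather than targeting the join.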