In the context of optimizing Spark performance for a machine learning project, which technique is used to combine multiple small tasks into larger tasks to reduce scheduling overhead and enhance processing efficiency?
A. Task Decomposition
B. Task Fusion
C. Task Aggregation
D. Task Parallelism
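The mechanism the question describes can be illustrated with a small cost model. This is a plain-Python sketch (not Spark itself, and the numbers are hypothetical): each task pays a fixed scheduling overhead, so combining many tiny tasks into fewer larger ones cuts total overhead while the useful work stays the same.

```python
# Illustrative cost model: per-task scheduling overhead plus real work.
def run_tasks(tasks, overhead_per_task=1.0):
    """Total cost = fixed scheduling overhead per task + work inside it."""
    return sum(overhead_per_task + sum(work) for work in tasks)

# 1000 tiny tasks, each carrying 0.1 units of real work.
tiny = [[0.1] for _ in range(1000)]

# The same total work combined into 10 larger tasks of 100 items each.
batched = [[0.1] * 100 for _ in range(10)]

cost_tiny = run_tasks(tiny)        # ~1100: overhead dominates
cost_batched = run_tasks(batched)  # ~110: same work, far less overhead
```

In Spark the analogous lever is reducing the number of partitions (and hence tasks), e.g. with `DataFrame.coalesce(n)` or adaptive query execution's partition coalescing, so that each task processes a meaningfully large chunk of data.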