
In the context of optimizing Spark performance for a machine learning project, which technique is used to combine multiple small tasks into larger tasks to reduce scheduling overhead and enhance processing efficiency?
A. Task Decomposition
B. Task Fusion
C. Task Aggregation
D. Task Parallelism
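Independent of which label the question bank expects, the underlying idea — fusing many tiny tasks into a few larger ones so the scheduler launches fewer units of work — can be sketched without Spark at all. The snippet below is a minimal, hedged illustration in plain Python using the standard library; the names `process_batch` and `fused_batches` are illustrative, not Spark APIs (in Spark itself, the analogous knobs are partition counts and operations such as `coalesce`).

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Stand-in for a cheap per-record computation.
    return x * x

def fused_batches(data, batch_size):
    # Fuse many small items into a few larger batches, so the pool
    # schedules len(data) / batch_size tasks instead of len(data) tasks.
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

def process_batch(batch):
    # One scheduled task now processes a whole batch of records.
    return [work(x) for x in batch]

items = list(range(1000))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = []
    # 4 scheduled tasks (1000 items / 250 per batch) rather than 1000.
    for partial in pool.map(process_batch, fused_batches(items, 250)):
        results.extend(partial)
```

Submitting one task per item would incur a scheduling event per record; batching amortizes that overhead, which is the efficiency gain the question describes.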