
When optimizing Spark performance for a machine learning project, which technique precomputes and stores intermediate results so that subsequent tasks avoid recomputing them, reducing computational overhead?
A. Data Compression
B. Result Caching
C. Lazy Evaluation
D. Task Fusion
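
Below is a minimal PySpark sketch of result caching, the technique described in the question (option B): persisting an intermediate DataFrame lets later actions reuse it instead of recomputing its full lineage. The app name, file paths, and column names are hypothetical and used for illustration only.

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

# Hypothetical session for illustration only.
spark = SparkSession.builder.appName("caching-demo").getOrCreate()

# An expensive intermediate result reused by several downstream jobs
# (hypothetical input path and columns).
features = (
    spark.read.parquet("/data/events.parquet")
         .filter("event_type = 'purchase'")
         .groupBy("user_id")
         .count()
)

# Persist the intermediate result so subsequent actions reuse it
# rather than re-running the filter and aggregation each time.
features.persist(StorageLevel.MEMORY_AND_DISK)

# The first action computes and caches the result; the second reuses it.
print(features.count())
features.write.mode("overwrite").parquet("/data/purchase_counts.parquet")

# Release the cached data once it is no longer needed.
features.unpersist()
```

Because Spark evaluates transformations lazily, without the `persist()` call each action would re-execute the entire lineage; caching trades memory (or disk) for that repeated computation.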