In the context of optimizing Spark performance for a large-scale machine learning project, which technique is used to store intermediate data in memory, thereby speeding up iterative algorithms and reducing disk I/O?
A. Data Shuffling
B. Disk Caching
C. In-Memory Computation
D. Data Replication
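
The technique the question describes, option C (in-memory computation), is commonly applied in Spark by caching or persisting an intermediate DataFrame so that iterative passes reuse data held in RAM instead of re-reading it from disk. The sketch below is a minimal, hypothetical PySpark example; the file path and the aggregation inside the loop are illustrative assumptions, not part of the question.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-example").getOrCreate()

# Hypothetical intermediate feature data; the path is illustrative only.
features = spark.read.parquet("/data/features.parquet")

# Keep the DataFrame in memory so each iteration reads from RAM
# instead of recomputing it or hitting disk again.
features.cache()      # for DataFrames this defaults to MEMORY_AND_DISK
features.count()      # run an action to materialize the cache

for _ in range(10):
    # Each iterative pass (e.g., a step of an ML algorithm)
    # now operates on the cached, in-memory data.
    features.groupBy().avg().collect()

features.unpersist()  # release the cached memory when finished
```

Without the `cache()` call, Spark's lazy evaluation would rebuild the DataFrame from its source on every iteration, which is exactly the repeated disk I/O the question asks how to avoid.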