During the deployment of a Spark streaming job, you need to ensure that the job can handle sudden spikes in data volume without compromising performance. What strategies would you employ to achieve this?
A. Use a fixed number of executors to maintain consistent performance.
B. Implement autoscaling and dynamic resource allocation to adapt to varying data volumes.
C. Increase the batch interval to reduce the processing load.
D. Run the streaming job on a single, large cluster to handle spikes.
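The correct choice is B: dynamic resource allocation lets Spark request executors when load rises and release them when it falls, rather than pinning capacity to a fixed or oversized cluster. As a minimal sketch (the app name and min/max executor counts below are illustrative values, not prescribed settings), the relevant configuration can be set when building the session:

```python
from pyspark.sql import SparkSession

# Sketch: enable dynamic allocation so the number of executors
# scales with the streaming workload. The min/max bounds here are
# example values to be tuned for the actual cluster and data volume.
spark = (
    SparkSession.builder
    .appName("streaming-autoscale-example")  # hypothetical app name
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "20")
    # Needed so dynamic allocation works without an external shuffle
    # service (Spark 3.x shuffle tracking).
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate()
)
```

On a managed platform, autoscaling of the underlying cluster nodes is typically configured alongside these settings, so both the executor count and the cluster size can follow demand during a spike.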