
Answer-first summary for fast verification
Answer: Implement autoscaling and dynamic resource allocation to adapt to varying data volumes.
With autoscaling and dynamic resource allocation, the cluster manager adds executors as the ingest rate climbs and releases them as it falls, so the job absorbs sudden spikes without over-provisioning for steady-state traffic. Combined with backpressure, this keeps batch processing times stable as data volume varies.
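As a concrete illustration, here is a minimal sketch of how dynamic allocation and backpressure might be enabled for such a job, assuming Spark 3.x on a cluster manager that supports executor scaling (e.g. YARN or Kubernetes); the application name and executor bounds are illustrative, not prescriptive.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: app name and executor bounds are illustrative values.
val spark = SparkSession.builder()
  .appName("spiky-stream-job")
  // Let the cluster manager add and remove executors with load.
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "2")   // steady-state floor
  .config("spark.dynamicAllocation.maxExecutors", "20")  // spike ceiling
  // Spark 3.x: shuffle tracking avoids requiring an external shuffle service.
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  // DStream-based jobs: throttle receivers when batches start falling behind.
  .config("spark.streaming.backpressure.enabled", "true")
  .getOrCreate()
```

Scaling out under load rather than pinning a fixed executor count means the job is neither starved during spikes nor wasteful during lulls.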
Author: LeetQuiz Editorial Team
During the deployment of a Spark streaming job, you need to ensure that the job can handle sudden spikes in data volume without compromising performance. What strategies would you employ to achieve this?
A. Use a fixed number of executors to maintain consistent performance.
B. Implement autoscaling and dynamic resource allocation to adapt to varying data volumes.
C. Increase the batch interval to reduce the processing load.
D. Run the streaming job on a single, large cluster to handle spikes.