How can you dynamically scale Spark resources in a Spark Streaming application consuming data from Kafka, especially when the workload varies significantly throughout the day?
A. Manually scale the Spark cluster based on expected workload patterns.
B. Implement a feedback loop using Spark's StreamingListener to adjust streaming rates and executor allocation.
C. Utilize Kafka's consumer group metrics to trigger scaling actions in the Spark Streaming application.
D. Configure Spark's dynamic allocation feature to automatically scale based on the rate of incoming Kafka messages.
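Option B describes a feedback loop built on Spark's StreamingListener. Below is a minimal Scala sketch of that idea: it watches the scheduling delay of completed batches and asks the cluster manager for more executors when batches start to queue up. The threshold values (maxDelayMs, executorsPerStep) and the scale-down branch are illustrative assumptions, not part of the question.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Hypothetical thresholds: scale up when batches fall behind, scale down when idle.
class ScalingListener(sc: SparkContext,
                      maxDelayMs: Long = 5000L,
                      executorsPerStep: Int = 2) extends StreamingListener {

  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
    val info = batchCompleted.batchInfo
    val schedulingDelay = info.schedulingDelay.getOrElse(0L)

    if (schedulingDelay > maxDelayMs) {
      // Batches are queuing up: request additional executors from the cluster manager.
      sc.requestExecutors(executorsPerStep)
    } else if (schedulingDelay == 0L && info.numRecords == 0L) {
      // Stream is idle: executors could be released here, e.g. via sc.killExecutors,
      // using executor IDs tracked elsewhere (omitted to keep the sketch short).
    }
  }
}

// Registration on an existing StreamingContext `ssc`:
// ssc.addStreamingListener(new ScalingListener(ssc.sparkContext))
```

In practice such a listener is usually combined with backpressure (spark.streaming.backpressure.enabled=true) so the ingestion rate from Kafka also adapts while executors are being added or removed.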