
Answer-first summary for fast verification
Answer: Configure Spark's dynamic allocation feature to automatically scale based on the rate of incoming Kafka messages.
Option D is the most suitable for dynamically scaling Spark resources when a Spark Streaming application consumes data from Kafka and the workload varies significantly throughout the day. Spark's dynamic allocation feature lets the application automatically adjust the number of executors to match the workload, keeping performance and resource utilization optimal. Manual scaling (Option A) is time-consuming and error-prone; driving scaling from Kafka's consumer group metrics (Option C) may not react in real time; and a feedback loop built on Spark's StreamingListener (Option B) is complex to implement and less efficient than the built-in dynamic allocation mechanism.
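As a minimal sketch of what Option D looks like in practice, the settings below enable Spark's streaming-specific dynamic allocation (used with the DStream API) plus backpressure, which adapts the Kafka ingestion rate to batch processing times. The executor counts are illustrative values, not recommendations; the property names are Spark configuration keys.

```python
# Sketch: Spark Streaming dynamic allocation settings, expressed as a plain
# dict that could be passed to a SparkConf or as --conf flags to spark-submit.
conf = {
    # Streaming-specific dynamic allocation; the generic
    # spark.dynamicAllocation.enabled flag should stay off when this is used.
    "spark.streaming.dynamicAllocation.enabled": "true",
    # Illustrative bounds on the executor pool (tune for your cluster).
    "spark.streaming.dynamicAllocation.minExecutors": "2",
    "spark.streaming.dynamicAllocation.maxExecutors": "20",
    # Backpressure lets Spark throttle the Kafka read rate when batches
    # start to lag, complementing executor scaling.
    "spark.streaming.backpressure.enabled": "true",
}

for key, value in conf.items():
    print(f"--conf {key}={value}")
```

With these settings, Spark scales executors up when batch processing times grow relative to the batch interval and releases them when the incoming message rate drops, which is exactly the behavior the varying daily workload in the question calls for.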
Author: LeetQuiz Editorial Team
How can you dynamically scale Spark resources in a Spark Streaming application consuming data from Kafka, especially when the workload varies significantly throughout the day?
A
Manually scale the Spark cluster based on expected workload patterns.
B
Implement a feedback loop using Spark's StreamingListener to adjust streaming rates and executor allocation.
C
Utilize Kafka's consumer group metrics to trigger scaling actions in the Spark streaming application.
D
Configure Spark's dynamic allocation feature to automatically scale based on the rate of incoming Kafka messages.