
Answer-first summary for fast verification
Answer: Employ Spark's dynamic resource allocation feature with the external shuffle service enabled in Kubernetes.
The correct strategy for dynamically allocating resources to Spark executors in Kubernetes under highly variable workloads is to employ Spark's dynamic resource allocation feature with the external shuffle service enabled. With dynamic allocation, Spark automatically scales the number of executors up and down based on pending task backlog and executor idleness, matching real-time demand without over-provisioning. Predefined resource quotas combined with static allocation cannot adapt to fluctuating demand, leading to either wasted capacity or resource starvation. Horizontal pod autoscaling based on CPU usage does not account for Spark-level signals such as task backlog and shuffle state, so it can scale at the wrong times. Manual adjustment of executor counts through the Spark UI is neither scalable nor responsive enough for dynamic workloads. Enabling the external shuffle service allows executors to be decommissioned without losing their shuffle output, because shuffle data is served externally rather than by the executor that produced it.
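As an illustrative sketch, the configuration below shows how this strategy might be enabled in a `spark-submit` invocation. The API server address, container image, and executor bounds are placeholders, not values from this question. Note that on Kubernetes, Spark 3.x does not ship a standalone external shuffle service out of the box, so deployments commonly use `spark.dynamicAllocation.shuffleTracking.enabled` to let dynamic allocation track shuffle data instead; both options are shown, with the shuffle-tracking variant active.

```shell
# Sketch: dynamic executor allocation for Spark on Kubernetes.
# Cluster URL, image name, and limits are illustrative placeholders.
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  # Turn on dynamic resource allocation.
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.dynamicAllocation.initialExecutors=2 \
  # Release executors idle longer than this timeout.
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  # Request more executors once tasks have been backlogged this long.
  --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
  # Common on Kubernetes (Spark 3.0+): track shuffle files so
  # executors holding live shuffle data are not removed.
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  # Alternative where an external shuffle service is deployed:
  # --conf spark.shuffle.service.enabled=true \
  local:///opt/spark/examples/jars/spark-examples.jar
```

In practice, `minExecutors`/`maxExecutors` bound the scaling range, while the idle and backlog timeouts control how aggressively Spark shrinks and grows the executor pool in response to demand.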
Author: LeetQuiz Editorial Team
When deploying Spark on Kubernetes with highly variable workloads, which strategy dynamically allocates resources to Spark executors to match real-time demand without over-provisioning?
A
Configure horizontal pod autoscaling for Spark executor pods based on CPU usage.
B
Employ Spark's dynamic resource allocation feature with external shuffle service enabled in Kubernetes.
C
Predefine resource quotas for namespaces and use Spark's static allocation mode.
D
Manually adjust the number of executors using the Spark UI based on workload.