When deploying Spark on Kubernetes with highly variable workloads, which strategy dynamically allocates resources to Spark executors to match real-time demand without over-provisioning?
A. Configure horizontal pod autoscaling for Spark executor pods based on CPU usage.
B. Employ Spark's dynamic resource allocation feature with external shuffle service enabled in Kubernetes.
C. Predefine resource quotas for namespaces and use Spark's static allocation mode.
D. Manually adjust the number of executors using the Spark UI based on workload.
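
Option B names Spark's dynamic resource allocation feature, which adds and removes executors as the stage backlog grows and shrinks. As a minimal sketch of how it is typically enabled on Kubernetes, the configuration below uses illustrative values (the master URL, app name, and executor bounds are placeholders, not values from the question). Note that on Kubernetes, Spark 3.x commonly relies on shuffle tracking rather than a separately deployed external shuffle service so that executors holding shuffle data can be released safely:

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: enabling dynamic resource allocation for Spark on Kubernetes.
// All literal values below are illustrative assumptions.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-demo")
  .master("k8s://https://<k8s-api-server>:6443") // hypothetical API server address
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "1")          // floor during idle periods
  .config("spark.dynamicAllocation.maxExecutors", "20")         // ceiling under peak load
  .config("spark.dynamicAllocation.executorIdleTimeout", "60s") // reclaim idle executors
  // On Kubernetes, shuffle tracking stands in for the external shuffle
  // service so executors can be removed without losing shuffle output.
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .getOrCreate()
```

With these settings, Spark scales the executor count between the configured minimum and maximum to match real-time demand, which is the behavior the question describes.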