
How can you dynamically allocate resources to Apache Spark jobs in Databricks to optimize processing time and cost when the jobs have varying computational requirements?
A. Writing custom logic within each Spark job to request additional resources from the Databricks REST API based on runtime metrics
B. Configuring each Spark job with a static allocation of resources based on peak usage estimates to ensure availability
C. Utilizing Databricks pools to pre-allocate a shared set of resources that Spark jobs can dynamically acquire as needed
D. Leveraging Databricks' autoscaling feature within Spark clusters to dynamically adjust resources based on the workload
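For context on options C and D: cluster autoscaling is expressed in the Databricks Clusters REST API as an "autoscale" block with a minimum and maximum worker count, and a cluster can optionally draw its instances from a pre-warmed pool. The sketch below is a minimal, non-authoritative example of creating such a cluster; the workspace URL, token, runtime version, node type, and pool ID are placeholders, not values from this question.

import requests

# Placeholder workspace URL and personal access token -- substitute real values.
DATABRICKS_HOST = "https://example.cloud.databricks.com"
TOKEN = "dapiXXXXXXXXXXXX"

# Cluster spec using an autoscale range instead of a fixed worker count.
# Databricks grows the cluster toward max_workers under load and shrinks it when idle.
cluster_spec = {
    "cluster_name": "autoscaling-etl-cluster",
    "spark_version": "13.3.x-scala2.12",  # placeholder runtime version
    "node_type_id": "i3.xlarge",          # placeholder node type
    "autoscale": {
        "min_workers": 2,
        "max_workers": 10,
    },
    # Optionally draw workers from a pre-created pool (option C) to reduce
    # start-up latency; the pool ID below is purely illustrative.
    # "instance_pool_id": "pool-1234567890",
}

response = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
response.raise_for_status()
print("Created cluster:", response.json()["cluster_id"])

Combining the two mechanisms is a common pattern: the pool keeps idle instances warm so that autoscaling can add workers quickly, while the autoscale range keeps cost proportional to the actual workload.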