When optimizing Spark data processing in Databricks on Microsoft Azure, which factor plays a pivotal role in determining the ideal number of partitions for a DataFrame?
A. The size of the data file on Azure Blob Storage.
B. The default parallelism configuration of the Spark session.
C. The number of nodes in the Databricks cluster.
D. The network bandwidth available between Azure Blob Storage and Databricks.
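For reference, here is a minimal PySpark sketch showing how to inspect the quantities that options A–C refer to on a live cluster. The storage path and the ×2 repartition multiplier are illustrative assumptions, not values taken from the question.

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession is already provided as `spark`;
# getOrCreate() returns it (or builds one when run elsewhere).
spark = SparkSession.builder.getOrCreate()

# Option B: default parallelism, typically the total core count
# across the cluster's worker nodes (which ties it to option C).
print("defaultParallelism:", spark.sparkContext.defaultParallelism)

# Partition count Spark uses after shuffles (joins, aggregations).
print("shuffle partitions:", spark.conf.get("spark.sql.shuffle.partitions"))

# Option A: input size drives the initial partitioning of a file scan.
# The path below is a hypothetical Azure Blob Storage location.
df = spark.read.parquet("wasbs://container@account.blob.core.windows.net/data/")
print("partitions after read:", df.rdd.getNumPartitions())

# A common heuristic: target roughly 128 MB of data per partition,
# with the count a small multiple of the cluster's total cores.
df = df.repartition(spark.sparkContext.defaultParallelism * 2)
print("partitions after repartition:", df.rdd.getNumPartitions())
```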