
As a Databricks Certified Data Engineer, you are tasked with configuring a Databricks cluster to use a specific Spark version so that it remains compatible with a particular Spark library. The project has strict compliance requirements and must stay within cost constraints while still scaling well. Given these constraints, which of the following would be the BEST approach, and why? (Choose one option)
A
Automatically select the latest Spark version available in Databricks for all clusters to ensure you always have the newest features and compatibility, regardless of specific library requirements.
B
Manually select a Spark version that is documented to be compatible with the specific library in question, configure the cluster to use this version, and review the Databricks Runtime release notes for any known issues.
C
Use the default Spark version provided by Databricks without any modifications, assuming that the default settings will meet all project requirements, including library compatibility.
D
Build a custom Spark version that includes modifications to support the specific library, then configure the cluster to use this custom version, despite the potential increase in maintenance and support overhead.
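As context for the version-pinning approach described in option B, below is a minimal sketch of creating a cluster pinned to an explicitly chosen Databricks Runtime (and therefore Spark) version using the Databricks SDK for Python. The runtime version string, node type, and sizing are illustrative assumptions, not values from the question; the point is that the version is chosen deliberately and checked against the versions the workspace actually exposes.

```python
# Minimal sketch: pin a cluster to a specific Databricks Runtime version.
# Assumes `pip install databricks-sdk` and that workspace authentication
# (e.g. DATABRICKS_HOST / DATABRICKS_TOKEN or a configured profile) is set up.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# List the runtime (Spark) versions the workspace currently offers, so the
# pinned version below can be verified against what is actually available.
for v in w.clusters.spark_versions().versions:
    print(v.key, "-", v.name)

# Pin the cluster to an explicitly chosen runtime rather than "latest".
# The version string, node type, and sizing are placeholders; pick the runtime
# documented as compatible with the library and a node type that fits the
# project's cost constraints.
cluster = w.clusters.create(
    cluster_name="library-compat-cluster",
    spark_version="13.3.x-scala2.12",  # hypothetical pinned LTS runtime
    node_type_id="i3.xlarge",          # hypothetical node type
    num_workers=2,                     # small fixed size to control cost
    autotermination_minutes=30,        # shut down idle clusters to save cost
).result()
print(f"Created cluster {cluster.cluster_id} on {cluster.spark_version}")
```

Declaring the runtime version explicitly in the cluster specification also makes the choice reviewable and reproducible, which aligns with the compliance requirement mentioned in the question.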