
A machine learning team wants to use the Python library 'newpackage' across all of its projects, which share a common cluster. What is the most effective way to make 'newpackage' available in every notebook on this cluster?
A
Configure the cluster to utilize the Databricks Runtime for Machine Learning
B
Execute %pip install newpackage in any notebook connected to the cluster
C
Adjust the runtime-version variable in their Spark session to 'ml'
D
Incorporate /databricks/python/bin/pip install newpackage into the cluster's bash init script
E
It's impossible to make 'newpackage' available across the entire cluster
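
For context on option D: a cluster-scoped bash init script runs on every node when the cluster starts, before notebooks attach. A minimal sketch of such a script follows; the file name install-newpackage.sh is an illustrative assumption, and only the pip path comes from the question itself.

    #!/bin/bash
    # install-newpackage.sh (illustrative name)
    # Runs on each node at cluster startup, so the package is
    # installed once per node and visible to every notebook
    # attached to the cluster.
    /databricks/python/bin/pip install newpackage

By contrast, a %pip install run inside a notebook (option B) is scoped to that notebook's Python environment, which is why it does not satisfy the cluster-wide requirement.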