A machine learning team wants to use the Python library 'newpackage' across all of their projects, which share a common cluster. What is the most effective way to make 'newpackage' available in all notebooks on this cluster?
A. Configure the cluster to use the Databricks Runtime for Machine Learning
B. Run %pip install newpackage in any notebook attached to the cluster
C. Set the runtime-version variable in their Spark session to 'ml'
D. Add /databricks/python/bin/pip install newpackage to the cluster's bash init script
E. It is not possible to make 'newpackage' available across the entire cluster
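
For reference, option D describes a cluster-scoped init script: a bash script that runs on every node when the cluster starts, so anything it installs is present before any notebook attaches. A minimal sketch of such a script is shown below; the pip path and package name come from the option itself, while the surrounding script structure is an illustrative assumption, not a verified Databricks template:

#!/bin/bash
# Cluster init script: runs on each node at cluster startup.
# Installing with the cluster's own pip binary puts 'newpackage' into
# the Python environment shared by all notebooks on the cluster,
# rather than into a single notebook's session.
/databricks/python/bin/pip install newpackage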