
Answer-first summary for fast verification
Answer: Use %pip install in a notebook cell
To install a Python package scoped to a single notebook yet available on all nodes of the active cluster, use `%pip install` in a notebook cell. Databricks installs the package on the driver and every worker node, but isolates it to the current notebook session, so other notebooks attached to the same cluster are unaffected.

- **Option A** activates a virtual environment via a setup script, which is not a supported notebook-scoped installation method in Databricks.
- **Option B** installs libraries cluster-wide through the cluster UI, making them visible to every notebook attached to the cluster, so it is not notebook-scoped.
- **Option D**, `%sh pip install`, runs pip as a shell command and installs into the system environment rather than a notebook scope, which can affect other notebooks and cause permission issues.
- **Option C** (correct), `%pip install`, installs the package on all cluster nodes while isolating it to the notebook's execution context.
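A minimal sketch of what such a notebook cell might look like; the package name and version pin are placeholders chosen for illustration, not part of the question:

```python
# Databricks Python notebook cell — notebook-scoped install.
# %pip is a notebook magic, not standard Python syntax, so this
# fragment only runs inside a Databricks (or IPython-style) notebook.
# The package is installed on the driver and all worker nodes, but
# remains visible only to this notebook's session.
%pip install requests==2.31.0
```

After the cell runs, Databricks restarts the notebook's Python interpreter so the freshly installed package can be imported; libraries installed this way disappear when the notebook is detached from the cluster.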
Author: LeetQuiz Editorial Team
What is the method for installing a Python package at the notebook level that will be available on all nodes of the active cluster?
A
Run source env/bin/activate in a notebook setup script
B
Install libraries from PyPI using the cluster UI
C
Use %pip install in a notebook cell
D
Use %sh pip install in a notebook cell