
Answer-first summary for fast verification
Answer: Include `torch` and the custom model in the cluster's library dependencies.
The optimal solution is **D. Include `torch` and the custom model in the cluster's library dependencies.** Installing both as cluster libraries ensures they are installed on every node and are available to all notebooks and jobs attached to the cluster, giving the team uniform access to the library and the model without per-notebook setup.

- **Option A**: Running `%pip install torch` installs PyTorch as a notebook-scoped library; it is available only in that notebook's session, not to other notebooks on the cluster.
- **Option B**: The ML runtime already bundles PyTorch, but switching runtimes does nothing to make the custom model available, so this change is insufficient.
- **Option C**: Setting the `MLFLOW_PYTORCH_VERSION` variable affects only the MLflow environment, not the broader PySpark environment, so the library and model would still be missing at training time.

Thus, adding both dependencies to the cluster's libraries is the most reliable approach.
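In practice, cluster libraries can be attached through the cluster UI or the Databricks Libraries API (`POST /api/2.0/libraries/install`). Below is a minimal sketch of an install payload; the cluster ID and the wheel path for the packaged custom model are placeholders, not values from this question:

```json
{
  "cluster_id": "0123-456789-abcdef12",
  "libraries": [
    { "pypi": { "package": "torch" } },
    { "whl": "dbfs:/FileStore/wheels/custom_model-0.1.0-py3-none-any.whl" }
  ]
}
```

With this payload, PyTorch is installed from PyPI and the custom model (packaged as a wheel) is installed alongside it on every node, so both are importable from any notebook attached to the cluster.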
Author: LeetQuiz Editorial Team
A machine learning team is developing a project that involves integrating a custom PyTorch model into their Databricks ML pipeline. They aim to make the PyTorch library and the custom model accessible for training across all notebooks within the workspace. What is the best practice to achieve this?
A
Run %pip install torch in any notebook connected to the cluster to install PyTorch.
B
Modify the cluster to utilize the Databricks Runtime for MLflow.
C
Configure the MLFLOW_PYTORCH_VERSION variable within the cluster settings.
D
Include torch and the custom model in the cluster's library dependencies.