
Answer-first summary for fast verification
Answer: Log the model using MLflow during training, directly register the model to Unity Catalog using the MLflow API, and start a serving endpoint
Option B is the correct answer as it outlines the most straightforward and integrated approach for deploying models on Databricks. Using MLflow to log the model during training and directly register it to Unity Catalog leverages Databricks' native MLOps capabilities, providing versioning, reproducibility, and seamless integration with serving endpoints. This approach is more efficient than manually handling model artifacts (Option A), building custom Docker containers (Option C), or creating Flask applications (Option D), which introduce unnecessary complexity and maintenance overhead. The community discussion confirms this, with 100% consensus on B and emphasis on its efficiency and simplicity.
Author: LeetQuiz Editorial Team
A Generative AI Engineer has already trained an LLM on Databricks, and the model is now ready for deployment. Which of the following steps correctly outlines the easiest process for deploying a model on Databricks?
A. Log the model as a pickle object, upload the object to Unity Catalog Volume, register it to Unity Catalog using MLflow, and start a serving endpoint
B. Log the model using MLflow during training, directly register the model to Unity Catalog using the MLflow API, and start a serving endpoint
C. Save the model along with its dependencies in a local directory, build the Docker image, and run the Docker container
D. Wrap the LLM's prediction function into a Flask application and serve using Gunicorn