
Question 10
When serving an LLM application using Foundation Model APIs on Databricks, which of the following is a key consideration for ensuring efficient deployment and scalability?
A. Fine-tune the LLM on Databricks and register the model with MLflow for version control, then use a Databricks REST API endpoint to serve the model.
B. Ensure the LLM is fully retrained on your specific dataset before deploying it to Databricks, as pre-trained models are not suitable for Foundation Model APIs.
C. Store the LLM as a Delta table in Unity Catalog and query it in real-time using SQL endpoints.
D. The LLM should be downloaded locally and deployed on a custom virtual machine for scalability.
Explanation:
Option A is the correct answer because it follows Databricks' best practices for LLM deployment (a sketch of this workflow follows below):
- Fine-tuning the LLM on Databricks keeps training, data, and governance within a single platform.
- Registering the model with MLflow provides version control, lineage, and a clear path for promoting specific model versions between environments.
- Serving through a Databricks REST API endpoint (Model Serving) gives a managed, autoscaling endpoint instead of hand-built infrastructure.
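As a concrete illustration of that workflow, here is a minimal sketch of logging and registering a fine-tuned model with MLflow in Unity Catalog. It assumes a Hugging Face transformers model; the checkpoint name and the three-level name `main.llm_models.support_assistant` are hypothetical placeholders, not values from the question.

```python
import mlflow
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the Unity Catalog model registry (rather than the workspace registry).
mlflow.set_registry_uri("databricks-uc")

# Placeholder: in practice this would be your fine-tuned checkpoint.
checkpoint = "distilgpt2"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

with mlflow.start_run():
    # Log the model and register it in one step; the registered name
    # "main.llm_models.support_assistant" is a hypothetical UC path.
    mlflow.transformers.log_model(
        transformers_model={"model": model, "tokenizer": tokenizer},
        artifact_path="llm",
        task="text-generation",
        registered_model_name="main.llm_models.support_assistant",
    )
```

Each registered version can then be attached to a Model Serving endpoint, which is what gives the deployment its rollback and scaling story.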
Why the other options are incorrect:
- B: Foundation Model APIs are built to serve pre-trained (and optionally fine-tuned) foundation models; full retraining from scratch is not required and defeats the purpose of starting from a foundation model.
- C: Delta tables in Unity Catalog store data, not executable models; an LLM cannot be stored as a table and "queried" through SQL endpoints.
- D: Downloading the model and hosting it on a custom virtual machine abandons Databricks' managed serving, autoscaling, and governance, which makes scaling harder, not easier.
Key benefits of the Option A approach:
- Reproducibility and rollback through MLflow model versioning.
- Managed, autoscaling inference via Databricks Model Serving REST endpoints.
- Unified governance and access control when the model is registered in Unity Catalog.
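Once deployed, the served model is invoked over plain REST. The following is a hedged sketch of a client call, not a definitive implementation: the workspace URL and endpoint name are placeholders, and the exact JSON payload schema depends on the endpoint's model type (a chat-style payload is shown here).

```python
import os
import requests

# Placeholders: substitute your own workspace URL and endpoint name.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
ENDPOINT_NAME = "support-assistant"

response = requests.post(
    f"{DATABRICKS_HOST}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    # Chat-style payload; other endpoint types expect a different schema.
    json={"messages": [{"role": "user", "content": "Summarize our refund policy."}]},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```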