
Answer-first summary for fast verification
Answer (Option B): Create a multi-task job with separate tasks for each step, configure task dependencies to enforce the sequence, and use shared storage for data exchange between tasks.
Configuring the deployment as a single multi-task Databricks job ensures that each step (training, evaluation, serving) runs in the correct order, because a task starts only after the tasks it depends on have succeeded. Passing data between tasks through shared storage keeps the handoffs explicit (a sketch of that handoff follows the answer options below), and splitting the pipeline into separate tasks makes failures easier to isolate, debug, and rerun without repeating earlier steps.
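As a rough illustration (not part of the original question), the sketch below uses the Databricks SDK for Python to create such a multi-task job; the job name, notebook paths, and cluster ID are placeholder assumptions.

```python
# Sketch: create a three-task job (train -> evaluate -> serve) whose
# depends_on links enforce the execution order. Requires the databricks-sdk
# package and workspace credentials (e.g. DATABRICKS_HOST / DATABRICKS_TOKEN).
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # picks up credentials from env vars or ~/.databrickscfg

job = w.jobs.create(
    name="nlp-model-deployment",  # placeholder name
    tasks=[
        jobs.Task(
            task_key="train",
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/nlp/train"),
            existing_cluster_id="<cluster-id>",  # placeholder cluster
        ),
        jobs.Task(
            task_key="evaluate",
            depends_on=[jobs.TaskDependency(task_key="train")],
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/nlp/evaluate"),
            existing_cluster_id="<cluster-id>",
        ),
        jobs.Task(
            task_key="serve",
            depends_on=[jobs.TaskDependency(task_key="evaluate")],
            notebook_task=jobs.NotebookTask(notebook_path="/Repos/nlp/serve"),
            existing_cluster_id="<cluster-id>",
        ),
    ],
)
print(f"Created job {job.job_id}")
```

Each task starts only after its dependency succeeds, which is exactly the ordering the question requires; the single-notebook and single-script options hide the steps inside one unit of execution, and scheduling separate jobs pushes the ordering onto a human operator.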
Author: LeetQuiz Editorial Team
You are responsible for deploying a natural language processing (NLP) model in a Databricks environment. The deployment involves training the model, evaluating its performance, and serving predictions. Each step has dependencies on the previous steps. Describe how you would configure this deployment as a Databricks job.
A. Create a single notebook that includes all steps, execute the notebook as a job, and handle dependencies within the notebook code.
B. Create a multi-task job with separate tasks for each step, configure task dependencies to enforce the sequence, and use shared storage for data exchange between tasks.
C. Schedule each step as a separate job, and manually trigger each job based on the completion of the previous step.
D. Combine all steps into a single Python script, upload the script to Databricks, and run it as a job.
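The correct option also calls for shared storage to pass data between tasks. Below is a minimal sketch of that handoff, assuming a DBFS path and a toy scikit-learn text classifier; both are illustrative placeholders, not details from the question.

```python
# Sketch of the shared-storage handoff between the "train" and "evaluate"
# tasks. The path and the toy model are assumptions for illustration only.
import os
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

SHARED_MODEL_PATH = "/dbfs/tmp/nlp_deployment/model.joblib"  # assumed shared location

# --- inside the "train" task ---
texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
os.makedirs(os.path.dirname(SHARED_MODEL_PATH), exist_ok=True)
joblib.dump(model, SHARED_MODEL_PATH)

# --- inside the "evaluate" task (runs only after "train" succeeds) ---
model = joblib.load(SHARED_MODEL_PATH)
print(model.predict(["fantastic support"]))  # downstream task reuses the artifact
```

In a production pipeline the artifact would more likely be logged to MLflow or written to a Unity Catalog volume, but the essential points are the same: an agreed shared location for outputs, and task dependencies that guarantee the producer finishes before the consumer starts.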