
Answer-first summary for fast verification
Answer: Write an MLflow PyFunc model that has a separate function to process the prompts
Option D is correct because an MLflow PyFunc model provides a structured, flexible way to implement custom preprocessing logic before prompts are sent to an LLM, while preserving a clean separation of concerns, model versioning, and deployment consistency. Option A is incorrect: directly modifying the LLM's internal architecture is impractical and compromises model integrity. Option B is incorrect: custom preprocessing is often necessary for domain-specific applications and can significantly improve results. Option C is incorrect: postprocessing can help align outputs with desired outcomes, but it does not address the need to optimize prompts before inference.
Author: LeetQuiz Editorial Team
What is a recommended approach for preprocessing prompts with custom logic before submitting them to a large language model (LLM)?
A
Directly modify the LLM’s internal architecture to include preprocessing steps
B
It is better not to introduce custom code to preprocess prompts as the LLM has not been trained with examples of the preprocessed prompts
C
Rather than preprocessing prompts, it’s more effective to postprocess the LLM outputs to align the outputs to desired outcomes
D
Write an MLflow PyFunc model that has a separate function to process the prompts