What is a recommended approach for preprocessing prompts with custom logic before submitting them to a large language model (LLM)?
A
Directly modify the LLM’s internal architecture to include preprocessing steps
B
It is better not to introduce custom code to preprocess prompts, because the LLM has not been trained on examples of the preprocessed prompts
C
Rather than preprocessing prompts, it is more effective to postprocess the LLM outputs to align them with the desired outcomes
D
Write an MLflow PyFunc model that has a separate function to preprocess the prompts