
Answer-first summary for fast verification
Answer: Ask users to indicate all scenarios where they expect concise responses versus verbose responses. Modify the application's prompt to include these scenarios and their respective verbosity levels. Re-evaluate the verbosity of responses with updated prompts.
Option C is the optimal choice because it directly addresses the core issue of varying verbosity expectations based on question type through prompt engineering, which is both scalable and efficient. By asking users to define scenarios where they expect concise versus verbose responses and incorporating these into the application's prompt, the solution leverages the LLM's existing capabilities without requiring model retraining or complex infrastructure changes. This approach is scalable for 1,000 users, as prompt modifications can be applied universally with minimal overhead. In contrast, Option A relies on simplistic keyword-based routing, which may not capture nuanced user intent and lacks flexibility. Option B involves supervised fine-tuning, which is resource-intensive, less scalable, and risks overfitting to the provided examples. Option D focuses on model selection rather than addressing verbosity control directly, making it inefficient for solving the specific user feedback issue.
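To make the prompt-engineering approach concrete, the scenarios users report can be rendered into a single reusable system prompt. The sketch below is illustrative only: the scenario list, wording, and function name are assumptions, not part of the question.

```python
# Hypothetical sketch of Option C: encode user-reported verbosity
# scenarios directly into the application's system prompt.
# The scenarios below are invented examples for illustration.
VERBOSITY_SCENARIOS = {
    "factual lookup (dates, names, numbers)": "concise",
    "how-to or troubleshooting question": "verbose",
    "conceptual explanation": "verbose",
    "yes/no confirmation": "concise",
}

def build_system_prompt(scenarios: dict) -> str:
    """Render scenario/verbosity pairs into one prompt string that is
    prepended to every user request, so no per-request routing logic
    or model retraining is needed."""
    rules = "\n".join(
        f"- For a {scenario}, give a {level} answer."
        for scenario, level in scenarios.items()
    )
    return (
        "Match your response length to the question type "
        "using these rules:\n" + rules
    )

print(build_system_prompt(VERBOSITY_SCENARIOS))
```

Because the prompt is assembled once and applied uniformly, this scales to all 1,000 users with no per-user infrastructure, and new scenarios can be added by editing the mapping.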
Author: LeetQuiz Editorial Team
You have deployed a conversational application using a large language model (LLM) for 1,000 users. User feedback indicates that while the responses are factually correct, users desire different levels of verbosity depending on the question type. Your goal is to make the model's responses more consistent with user expectations using a scalable solution. What should you do?
A
Implement a keyword-based routing layer. If the user's input contains the words "detailed" or "description," return a verbose response. If the user's input contains the word "fact," re-prompt the language model to summarize the response and return a concise response.
B
Ask users to provide examples of responses with the appropriate verbosity as a list of question and answer pairs. Use this dataset to perform supervised fine-tuning of the foundation model. Re-evaluate the verbosity of responses with the tuned model.
C
Ask users to indicate all scenarios where they expect concise responses versus verbose responses. Modify the application's prompt to include these scenarios and their respective verbosity levels. Re-evaluate the verbosity of responses with updated prompts.
D
Experiment with other proprietary and open-source LLMs. Perform A/B testing by setting each model as your application's default model. Choose a model based on the results.