
Answer-first summary for fast verification
Answer: A. Use few-shot prompting to instruct the model on the expected output format.
The correct answer is A because few-shot prompting provides concrete examples that demonstrate the exact desired output format (just the label without reasoning), which is more effective than zero-shot approaches for teaching specific formatting requirements. Option B (zero-shot prompting) is less reliable for enforcing strict output formats without examples. Option C (zero-shot chain-of-thought) would likely increase reasoning text rather than reduce it. Option D (system prompt) can help but is generally less precise than few-shot examples for enforcing specific output formats.
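To make the idea concrete, here is a minimal sketch of a few-shot prompt for this scenario. The label list, example descriptions, and `build_prompt` helper are all hypothetical, invented for illustration; the point is that each example answers with the label alone, showing the model the exact output format to imitate.

```python
# Hypothetical label list and few-shot examples for illustration only.
VALID_LABELS = ["Amanita muscaria", "Boletus edulis", "Cantharellus cibarius"]

FEW_SHOT_EXAMPLES = [
    ("Red cap with white spots, white gills, grows near birch trees.",
     "Amanita muscaria"),
    ("Brown cap, thick white stem, spongy pores instead of gills.",
     "Boletus edulis"),
]

def build_prompt(description: str) -> str:
    """Assemble a few-shot prompt whose examples answer with the label only."""
    lines = [
        "Classify the mushroom description below. "
        f"Respond with exactly one label from: {', '.join(VALID_LABELS)}. "
        "Output the label only, with no explanation.",
        "",
    ]
    # Each demonstration shows the desired format: label, nothing else.
    for example_description, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Description: {example_description}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The query ends mid-pattern, so the model's natural completion
    # is a bare label matching the demonstrated format.
    lines.append(f"Description: {description}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_prompt("Golden, funnel-shaped, with forked ridges under the cap.")
print(prompt)
```

Because the engineer has a confirmed label list, the model's completion can also be validated after the fact by checking it against `VALID_LABELS` and retrying on a mismatch.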
Author: LeetQuiz Editorial Team
A Generative AI Engineer is using an LLM to classify mushroom species from text descriptions. The model is accurate but its responses include unwanted reasoning text alongside the label. The engineer has a confirmed list of valid labels and wants the output to contain only the label itself.
What should they do to get the LLM to produce this desired output?
A. Use few-shot prompting to instruct the model on the expected output format.
B. Use zero-shot prompting to instruct the model on the expected output format.
C. Use zero-shot chain-of-thought prompting to prevent a verbose output format.
D. Use a system prompt to instruct the model to be succinct in its answer.