
A Generative AI Engineer is using an LLM to classify mushroom species from text descriptions. The model is accurate, but its responses include unwanted reasoning text alongside the label. The engineer has a confirmed list of valid labels and wants the output to contain only the label itself.
What should they do to get the LLM to produce this desired output?
A. Use few-shot prompting to instruct the model on the expected output format
B. Use zero-shot prompting to instruct the model on the expected output format
C. Use zero-shot chain-of-thought prompting to prevent a verbose output format
D. Use a system prompt to instruct the model to be succinct in its answer
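For context, few-shot prompting means including input/output example pairs in the prompt so the model imitates their format. A minimal sketch of assembling such a prompt for this scenario, using hypothetical labels and descriptions (the label list and helper function here are illustrative, not from the question):

```python
# Hypothetical confirmed label list (illustrative values).
VALID_LABELS = ["Amanita muscaria", "Boletus edulis", "Cantharellus cibarius"]

# Few-shot examples: each one demonstrates label-only output,
# with no reasoning text.
FEW_SHOT_EXAMPLES = [
    ("Red cap with white spots, white gills, grows near birch trees.",
     "Amanita muscaria"),
    ("Brown cap, thick pale stalk, spongy pores instead of gills.",
     "Boletus edulis"),
]

def build_prompt(description: str) -> str:
    """Assemble a few-shot prompt whose examples show only the label."""
    parts = ["Classify the mushroom. Respond with the label only."]
    for example_desc, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Description: {example_desc}\nLabel: {label}")
    # The final entry leaves the label blank for the model to complete.
    parts.append(f"Description: {description}\nLabel:")
    return "\n\n".join(parts)

prompt = build_prompt("Yellow funnel-shaped cap, forked ridges, fruity smell.")
print(prompt)
```

Because every demonstration ends with a bare label, the model is steered toward emitting only a label from the confirmed list rather than explanatory text.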