
Which technique is most effective for reducing hallucinations in large language models?
A. Self-consistency decoding
B. Few-shot prompting
C. Setting temperature to 0 and specifying strict output format
D. Chain-of-thought prompting
Explanation:
The correct answer is C. Setting temperature to 0 and specifying a strict output format is, among these options, the most effective technique for reducing hallucinations because:
Temperature = 0: This makes decoding deterministic: the model always selects the highest-probability token, removing the sampling randomness that can surface low-probability, fabricated continuations.
Strict output format: Constraining the structure of the response (e.g., a JSON schema, a fixed template, or an enumerated set of values) narrows the space of acceptable outputs, so unsupported free-form claims are easier to detect and reject.
How the other options compare:
Self-consistency decoding samples multiple reasoning paths and takes a majority vote; it can improve answer accuracy, but it relies on sampling at nonzero temperature rather than constraining output.
Few-shot prompting supplies example input-output pairs; it shapes task framing and style more than it grounds factual claims.
Chain-of-thought prompting elicits intermediate reasoning steps; it helps on multi-step problems, but the reasoning itself can be hallucinated.
Hallucination control: The combination of deterministic decoding (temperature = 0) and format constraints provides the strongest guardrails among these options against the model generating fabricated or incorrect information.
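As a concrete illustration, the two settings in answer C can be sketched in Python. This is a minimal sketch, not any specific provider's API: the request field names (`temperature`, `response_format`) are modeled on common chat-completion APIs and may differ for your provider, and the model reply is a hard-coded stand-in rather than a real API call. The validator shows the "strict output format" half: any response that is not JSON with exactly the expected keys is rejected.

```python
import json

def build_request_params(prompt: str) -> dict:
    # Hypothetical field names modeled on common chat-completion APIs;
    # consult your provider's documentation for the exact parameters.
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic: always pick the highest-probability token
        "response_format": {"type": "json_object"},  # ask for JSON-only output
    }

def validate_strict_output(raw: str, required_keys: set) -> dict:
    """Enforce the strict format client-side: reject anything that is not
    valid JSON with exactly the expected keys."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    if set(data) != required_keys:
        raise ValueError(f"unexpected keys: {set(data)} != {required_keys}")
    return data

params = build_request_params("Return the capital of France as JSON with key 'capital'.")
reply = '{"capital": "Paris"}'  # stand-in for the model's response
print(validate_strict_output(reply, {"capital"}))
```

Responses that drift from the schema (extra keys, prose around the JSON) fail validation, so fabricated free-form content is caught rather than silently passed downstream.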