
Which technique is most effective for reducing hallucinations in large language models?
A. Self-consistency decoding
B. Few-shot prompting
C. Setting temperature to 0 and specifying strict output format
D. Chain-of-thought prompting
Explanation:
Setting temperature to 0 and specifying strict output format is the most effective technique for reducing hallucinations in large language models because:
Temperature = 0: This makes decoding deterministic (greedy): the model always selects the highest-probability token, removing the sampling randomness and variability that can introduce hallucinated content.
Strict output format: Specifying exactly how the output must be structured (e.g., JSON, a fixed template, or another constrained format) narrows the space of acceptable responses and guides the model toward reliable, verifiable output, as sketched below.
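As a concrete illustration (not part of the original question), here is a minimal sketch of both settings applied together using the OpenAI Python SDK; the model name and prompts are placeholder assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Deterministic decoding (temperature=0) plus a strict JSON output format.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from well-established facts. "
                'Reply as JSON: {"answer": "...", "confidence": 0.0}.'
            ),
        },
        {"role": "user", "content": "In what year was the transistor invented?"},
    ],
    temperature=0,  # always select the highest-probability token
    response_format={"type": "json_object"},  # constrain output to valid JSON
)
print(response.choices[0].message.content)
```

Note that the API's JSON mode requires the word "JSON" to appear somewhere in the prompt, which the system message above satisfies; the format constraint also makes malformed or off-task responses easy to detect and reject programmatically.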
Comparison with the other options:
Self-consistency decoding (A): Generates multiple reasoning paths and takes a majority vote over the final answers (see the sketch after this list), which improves reasoning reliability but doesn't specifically target hallucination reduction.
Few-shot prompting (B): Provides worked examples that guide the model, which helps with task understanding but doesn't directly control hallucination.
Chain-of-thought prompting (D): Encourages step-by-step reasoning, which improves reasoning quality but doesn't inherently prevent the model from fabricating facts along the way.
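For contrast with option C, here is a minimal sketch of self-consistency decoding (option A) under the same assumed SDK; the model name, prompt wording, and the "Answer:" extraction heuristic are all illustrative assumptions, not part of the quiz:

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def self_consistency(question: str, n_samples: int = 5) -> str:
    """Sample several reasoning paths, then majority-vote the final answers."""
    answers = []
    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": f"{question}\nThink step by step, then finish with 'Answer: <value>'.",
            }],
            temperature=0.7,  # nonzero on purpose: diversity across paths is the point
        )
        text = response.choices[0].message.content or ""
        # Naive extraction: keep whatever follows the last 'Answer:' marker.
        if "Answer:" in text:
            answers.append(text.rsplit("Answer:", 1)[1].strip())
    # Majority voting improves reasoning accuracy, but every sampled path can
    # still agree on the same hallucinated fact, which is why this technique
    # does not directly reduce hallucinations.
    return Counter(answers).most_common(1)[0][0] if answers else ""
```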
Hallucination control: The combination of deterministic decoding (temperature = 0) and format constraints provides the strongest guardrails against the model generating fabricated or incorrect information.