
The team complains that the model sometimes ignores key details and produces too-short answers. They want slightly more exploration in token selection. Which parameter adjustment helps?
A
Increase top-p from 0.7 to 0.9
B
Decrease temperature to 0.1
C
Set top-k from 100 to 10
D
Reduce top-p from 0.9 to 0.4
Explanation:
Correct Answer: A (Increase top-p from 0.7 to 0.9)
Why this is correct:
Increasing top-p from 0.7 to 0.9 widens the nucleus of candidate tokens from 70% to 90% of the cumulative probability mass. The model can then sample from a broader pool of plausible tokens, which encourages the slightly greater exploration the team wants and can yield longer, more detailed responses that attend to more of the input's details.
Why other options are incorrect:
B (Decrease temperature to 0.1): Lower temperature makes the model more deterministic and conservative, which would likely make the problem worse by making the model even more likely to stick to the most probable tokens and produce shorter, more generic responses.
C (Set top-k from 100 to 10): Decreasing top-k from 100 to 10 restricts the model to consider only the top 10 most likely tokens, which reduces exploration and diversity, making the problem worse.
D (Reduce top-p from 0.9 to 0.4): Decreasing top-p restricts the token pool to a smaller set (only 40% of the probability mass), which reduces exploration and diversity, making the problem worse.
Key Concept:
For the specific problem of "ignoring key details and producing too-short answers," increasing top-p is the appropriate adjustment to encourage more exploration and potentially longer, more detailed responses.
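To make the effect concrete, here is a minimal sketch of nucleus (top-p) filtering using NumPy. The toy probability distribution is invented for illustration; the function simply shows how raising top-p from 0.7 to 0.9 enlarges the set of tokens eligible for sampling.

```python
import numpy as np

def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of tokens whose
    cumulative probability mass reaches top_p (the 'nucleus')."""
    order = np.argsort(probs)[::-1]      # sort tokens by probability, descending
    cum = np.cumsum(probs[order])        # running probability mass
    # count tokens whose cumulative mass is still strictly below top_p,
    # then include one more to cross the threshold
    cutoff = int((cum < top_p).sum()) + 1
    return order[:cutoff]

# Toy next-token distribution over 6 candidate tokens
probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])

print(len(top_p_filter(probs, 0.7)))  # → 3 tokens in the nucleus
print(len(top_p_filter(probs, 0.9)))  # → 4 tokens in the nucleus
```

With top-p = 0.7 only the three most likely tokens are candidates; at 0.9 a fourth joins the pool, giving the sampler more room to explore without the across-the-board flattening that a higher temperature would cause.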