The LLM is producing creative outputs, but the team wants responses to stay more conservative and on-topic, especially for customer support use cases. What adjustment is recommended?
A
Increase top-p to 0.95
B
Lower temperature to 0.3
C
Set top-k to 150
D
Increase temperature to 1.0
Explanation:
Correct Answer: B - Lower temperature to 0.3
Why this is correct:
Temperature rescales the model's output distribution before sampling. Lowering it to 0.3 sharpens the distribution so that high-probability tokens dominate, making the output more deterministic and less creative. This keeps responses conservative and on-topic, which is exactly what consistent customer support answers require.
Why the other options are incorrect:
A. Increase top-p to 0.95 - Top-p (nucleus sampling) keeps the smallest set of tokens whose cumulative probability exceeds p. A high value like 0.95 keeps a large nucleus of candidate tokens, which permits more diversity, not less. To restrict creativity via nucleus sampling you would lower top-p, not raise it.
C. Set top-k to 150 - Top-k limits sampling to the k most likely tokens. A pool of 150 tokens still allows considerable diversity, so it would not make outputs more conservative.
D. Increase temperature to 1.0 - A higher temperature flattens the distribution, making outputs more random and creative, which is the opposite of what's needed for conservative, on-topic customer support responses.
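The effect of lowering temperature can be seen in a minimal sketch in plain Python. The logit values below are made up for illustration, not taken from a real model:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for four candidate tokens (illustrative values only)
logits = [2.0, 1.0, 0.5, 0.1]

creative = softmax(logits, temperature=1.0)
conservative = softmax(logits, temperature=0.3)

# At temperature 0.3 the top token captures far more probability mass,
# so sampling almost always picks the safest continuation.
print(f"T=1.0: top-token probability = {creative[0]:.2f}")
print(f"T=0.3: top-token probability = {conservative[0]:.2f}")
```

With these toy logits, the top token's probability jumps from roughly 0.57 at temperature 1.0 to over 0.95 at temperature 0.3, which is why the output becomes far more predictable.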
Key Concepts:
Top-p (nucleus sampling): Samples from the smallest set of tokens whose cumulative probability exceeds p; a lower p means a tighter, more focused set
Temperature: Controls randomness (lower = more deterministic, higher = more creative)
Top-k: Limits sampling to the k most likely tokens
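A small sketch of nucleus filtering (over a toy probability distribution, assumed purely for illustration) shows why a high top-p such as 0.95 is permissive rather than restrictive:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of token indices whose cumulative probability exceeds p."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, prob in ranked:
        kept.append(idx)
        cumulative += prob
        if cumulative >= p:
            break
    return kept

# Toy distribution over six candidate tokens (illustrative values only)
probs = [0.45, 0.25, 0.15, 0.08, 0.05, 0.02]

print(top_p_filter(probs, 0.5))   # tight nucleus: only [0, 1] survive
print(top_p_filter(probs, 0.95))  # loose nucleus: [0, 1, 2, 3, 4] survive
```

With p = 0.95, five of the six tokens remain in the sampling pool, so raising top-p toward 1.0 increases diversity rather than constraining it.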
For customer support use cases where consistency and accuracy are critical, lowering temperature (e.g., to 0.3), optionally combined with a reduced top-p, keeps the model focused on high-probability, relevant responses.