A text generator frequently produces generic or repetitive responses. The team wants more diverse options without making the output too chaotic. Which parameter should they modify?
A
Increase the temperature to 0.85
B
Set top-p to 0.98
C
Reduce temperature to 0.1
D
Reduce top-k to 20
Explanation:
Temperature is a parameter that controls the randomness of predictions in language models. It affects the probability distribution of the next token:
Low temperature (e.g., 0.1): Makes the model more deterministic and focused on the highest-probability tokens, leading to more predictable and repetitive responses
High temperature (e.g., 0.85): Increases randomness by flattening the probability distribution, allowing the model to consider lower-probability tokens and produce more diverse, creative outputs (see the sketch below)
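A minimal Python sketch of how temperature reshapes the next-token distribution (the four-token vocabulary and its logits are made up for illustration): sampling applies softmax to the logits divided by T, so a smaller T sharpens the distribution and a larger T flattens it.

```python
import math

def softmax_with_temperature(logits, temperature):
    # p_i = exp(z_i / T) / sum_j exp(z_j / T)
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for a tiny 4-token vocabulary.
logits = [2.0, 1.5, 1.0, 0.5]

for t in (0.1, 0.85):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}:", [round(p, 3) for p in probs])
# T=0.1  -> [0.993, 0.007, 0.0, 0.0]      (nearly deterministic)
# T=0.85 -> [0.491, 0.273, 0.152, 0.084]  (flatter, so samples vary more)
```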
Why option A is correct:
Increasing temperature to 0.85 adds more randomness without making outputs completely chaotic
This helps break repetitive patterns while maintaining coherence
Why other options are incorrect:
Option B (top-p = 0.98): Top-p (nucleus sampling) keeps the smallest set of highest-probability tokens whose cumulative probability reaches the threshold. At 0.98 nearly the entire distribution is already included, so this value wouldn't significantly increase diversity (see the sketch after this list)
Option C (reduce temperature to 0.1): This would make outputs even more deterministic and repetitive
Option D (reduce top-k to 20): Top-k limits sampling to the k highest-probability tokens. Reducing it to 20 would shrink the candidate pool and restrict diversity rather than increase it
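For contrast, here is a minimal sketch of top-k and top-p (nucleus) filtering over a hypothetical five-token distribution. It shows why top-p = 0.98 barely filters anything while a small top-k actively restricts the candidate pool; real sampling libraries apply the same idea over the full vocabulary.

```python
def top_k_filter(probs, k):
    # Keep only the k highest-probability tokens, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

def top_p_filter(probs, threshold):
    # Nucleus sampling: keep the smallest set of top tokens whose
    # cumulative probability reaches the threshold, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cumulative = set(), 0.0
    for i in order:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= threshold:
            break
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

# Hypothetical next-token probabilities, sorted for readability.
probs = [0.50, 0.25, 0.15, 0.07, 0.03]
print(top_p_filter(probs, 0.98))  # keeps all 5 tokens: almost no filtering
print(top_k_filter(probs, 2))     # keeps only 2 tokens: the pool shrinks
```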
Key takeaway: Temperature is the primary parameter for controlling the trade-off between diversity and determinism in text generation.