A content moderation team wants to reduce the number of rare and unexpected tokens generated by the model while still allowing some randomness. Which parameter should be adjusted?
A. Increase top-k to 100
B. Lower top-k to 20
C. Set temperature to 1.0
D. Increase top-p to 1.0
Explanation:
Correct Answer: B (Lower top-k to 20)
Top-k Sampling: This parameter limits the model's vocabulary to only the top 'k' most probable tokens at each step of generation. By lowering top-k to 20, you restrict the model to choose from only the 20 most likely tokens, which reduces the chance of selecting rare or unexpected tokens.
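As a minimal sketch of the idea, the snippet below applies top-k filtering to a toy probability distribution (the vocabulary, probabilities, and the helper name `top_k_filter` are illustrative, not from any specific library):

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens and renormalize."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

# Toy next-token distribution: a few likely tokens plus a rare one.
vocab = {"the": 0.4, "a": 0.3, "cat": 0.15, "dog": 0.1, "xylyl": 0.05}

# With k=2, only "the" and "a" remain candidates; rare tokens
# like "xylyl" can never be sampled, no matter the temperature.
filtered = top_k_filter(vocab, k=2)
```

Randomness is preserved within the surviving k tokens, which is exactly the trade-off the moderation team wants: some variation, but no long-tail surprises.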
Top-p (Nucleus Sampling): This parameter selects from the smallest set of tokens whose cumulative probability exceeds 'p'. Increasing top-p to 1.0 would actually allow the model to consider ALL tokens, which would INCREASE the chance of rare/unexpected tokens.
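A sketch of nucleus sampling makes the contrast concrete: with p = 1.0 the cumulative cutoff is only reached once every token is included, so nothing is filtered out (again a toy distribution and an illustrative helper name):

```python
def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose cumulative probability >= p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for tok, pr in ranked:
        kept.append((tok, pr))
        cum += pr
        if cum >= p:
            break
    total = sum(pr for _, pr in kept)
    return {tok: pr / total for tok, pr in kept}

vocab = {"the": 0.4, "a": 0.3, "cat": 0.15, "dog": 0.1, "xylyl": 0.05}

nucleus_09 = top_p_filter(vocab, 0.9)   # rare "xylyl" is excluded
nucleus_10 = top_p_filter(vocab, 1.0)   # every token stays eligible
```

So raising top-p toward 1.0 widens the candidate set, the opposite of what the question asks for.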
Temperature: Controls the randomness of predictions by scaling the logits before applying softmax. A temperature of 1.0 is the default and leaves the model's original probability distribution unchanged, so it doesn't specifically reduce rare tokens. Lowering the temperature below 1.0 would sharpen the distribution and suppress rare tokens, but that is not the option offered here.
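The effect of temperature can be sketched directly on a few toy logits: dividing by the temperature before the softmax leaves the distribution untouched at 1.0 and sharpens it below 1.0 (values here are illustrative):

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0, 0.1]

p_default = apply_temperature(logits, 1.0)  # identical to plain softmax
p_cold = apply_temperature(logits, 0.5)     # sharper: tail tokens suppressed
```

At temperature 1.0 the lowest-logit token keeps its original probability mass; at 0.5 its share shrinks while the top token's grows, which is why only lowering (not keeping) the temperature would reduce rare tokens.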
Top-k sampling is an effective method to control vocabulary diversity. Lower values make outputs more predictable and less likely to contain rare tokens, while higher values increase diversity at the risk of generating less coherent or unexpected content.