
A team wants their generative model to produce more creative marketing copy with varied sentence structures and surprising word choices. Which inference parameter should they adjust?
A. Decrease temperature to 0.1
B. Increase temperature to 1.2
C. Set top-k to 5
D. Set top-p to 0.2
Explanation:
Temperature is a key inference parameter that controls the randomness and creativity of a generative model's output:
Lower temperature values (e.g., 0.1): Make the model more deterministic and conservative, choosing the most probable tokens. This results in more predictable, safe, and repetitive outputs.
Higher temperature values (e.g., 1.2): Increase randomness and creativity by making the probability distribution more uniform. This encourages the model to explore less likely tokens, leading to more varied sentence structures, surprising word choices, and creative outputs (see the short sketch below).
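The following is a minimal sketch, not any particular provider's implementation, of how temperature rescales token logits before sampling; the logit values and token count are illustrative assumptions.

import numpy as np

def sample_token(logits, temperature, rng=np.random.default_rng(0)):
    """Divide logits by temperature, softmax, and sample one token index."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs

logits = [4.0, 2.5, 2.0, 0.5]                     # hypothetical scores for 4 candidate tokens

_, cold = sample_token(logits, temperature=0.1)
_, hot = sample_token(logits, temperature=1.2)
print("T=0.1 ->", cold.round(3))   # probability mass collapses onto the single most likely token
print("T=1.2 ->", hot.round(3))    # flatter distribution: rarer tokens become viable choices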
Why not the other options?
Option A (Decrease temperature to 0.1): Would make the output more deterministic and less creative, the opposite of what is needed.
Option C (Set top-k to 5): Top-k sampling restricts the model to the k most likely tokens at each step. A very low value such as 5 narrows the candidate pool considerably, which limits rather than encourages creativity.
Option D (Set top-p to 0.2): Top-p (nucleus sampling) samples only from the smallest set of top tokens whose cumulative probability reaches p. A low value such as 0.2 restricts the vocabulary significantly, reducing creativity. The sketch below illustrates how low top-k and top-p values shrink the candidate token set.
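Here is a minimal sketch, with purely illustrative probability values, of how top-k and top-p filtering prune the candidate tokens before sampling.

import numpy as np

def top_k_filter(probs, k):
    """Keep only the k most likely tokens, then renormalize."""
    keep = np.argsort(probs)[::-1][:k]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    return mask / mask.sum()

def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1   # number of tokens kept
    mask = np.zeros_like(probs)
    mask[order[:cutoff]] = probs[order[:cutoff]]
    return mask / mask.sum()

probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])   # hypothetical next-token probabilities
print(top_k_filter(probs, k=2))    # only the 2 best tokens survive; k=5 over a real
                                   # vocabulary of tens of thousands of tokens is similarly restrictive
print(top_p_filter(probs, p=0.2))  # p=0.2 keeps only the single top token here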
For marketing copy that needs to be creative with varied sentence structures and surprising word choices, increasing the temperature is the most appropriate adjustment.
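In practice, temperature is simply a request parameter. The call below is a minimal sketch assuming the OpenAI Python SDK; the model name and prompt are placeholders, and any provider that exposes temperature (and top-p/top-k) settings is adjusted the same way.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",            # placeholder model name
    messages=[{"role": "user",
               "content": "Write three taglines for a new coffee brand."}],
    temperature=1.2,                # higher temperature for more varied, surprising copy
)
print(response.choices[0].message.content)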