
A chatbot is producing overly random responses with off-topic tokens. The team wants to keep responses relevant but still allow some variability. Which action provides the best balance?
A. Set temperature = 0.7
B. Set temperature = 0.0
Explanation:
Temperature is a parameter that controls the randomness of AI model outputs:
Temperature = 0.0: Produces deterministic, predictable responses by always selecting the most probable next token. This eliminates randomness but can make responses repetitive and less creative.
Temperature = 0.7: Provides a good balance between randomness and relevance. It allows some variability while keeping responses mostly on-topic.
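Conceptually, temperature rescales the model's next-token logits before they are converted into probabilities: lower values sharpen the distribution, higher values flatten it. The following Python sketch illustrates temperature-scaled sampling; the logit values and the greedy fallback at temperature = 0 are assumptions for demonstration only, not any particular model's API:

```python
import numpy as np

def sample_next_token(logits, temperature=0.7, rng=None):
    """Sample a next-token index from raw logits scaled by temperature.

    temperature = 0 falls back to greedy (argmax) selection, matching the
    deterministic behaviour described for temperature = 0.0 above.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)
    if temperature <= 0:
        return int(np.argmax(logits))            # deterministic choice
    scaled = logits / temperature                # lower T -> sharper distribution
    scaled -= scaled.max()                       # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical logits for four candidate tokens
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=0.7))  # usually token 0, occasionally others
```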
Why temperature = 0.7 is correct:
The chatbot is currently producing overly random responses with off-topic tokens, which indicates the temperature is likely set too high (e.g., 1.0 or higher).
The team wants to keep responses relevant, and lowering the temperature helps with this.
The team still wants to allow some variability; temperature = 0.0 would eliminate it entirely, making responses too predictable.
Temperature = 0.7 is a commonly recommended value that provides a good balance between creativity and coherence.
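In practice, applying the fix is usually a one-line configuration change in whatever SDK the chatbot uses. The sketch below assumes an OpenAI-style chat completions client purely for illustration; the model name and messages are placeholders, and other providers expose an equivalent temperature parameter:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for illustration
    messages=[
        {"role": "system", "content": "You are a helpful support chatbot."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
    temperature=0.7,  # balanced: relevant responses with some variability
)
print(response.choices[0].message.content)
```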
Temperature scale:
0.0-0.3: Very low randomness, highly predictable
0.4-0.7: Balanced randomness (recommended for most applications)
0.8-1.0: High randomness, more creative but less focused
>1.0: Very high randomness, often produces nonsensical or off-topic responses
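To see why those ranges behave as described, the short sketch below recomputes the token distribution for the same hypothetical logits at a low, balanced, and high temperature and prints how much probability mass lands on the top token (the numbers are illustrative, not taken from any real model):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                 # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5, -1.0]             # hypothetical next-token scores
for t in (0.2, 0.7, 1.2):
    top = softmax_with_temperature(logits, t).max()
    print(f"T={t}: top-token probability = {top:.2f}")
# Low T concentrates mass on the most likely token (predictable output);
# high T spreads it across alternatives (more random, risk of off-topic tokens).
```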
By setting temperature to 0.7, the chatbot will produce more relevant responses while maintaining enough variability to avoid sounding robotic or repetitive.