
An HR assistant built on Bedrock starts giving imaginative but off-topic answers. The team wants more factual responses. What should they change?
A. Decrease temperature
B. Increase top-p
C. Raise max-tokens
D. Disable stop-sequences
Correct answer: A. Decrease temperature
Explanation:
Temperature is a parameter that controls the randomness of the model's responses:
- Lower temperature (e.g., 0.1-0.3): makes the model more deterministic and focused on the most likely tokens, producing more factual, consistent, and on-topic answers.
- Higher temperature (e.g., 0.7-1.0): increases creativity and randomness, which can lead to more imaginative but potentially off-topic responses.
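For illustration, here is a minimal sketch of lowering temperature through the Bedrock Converse API with boto3. The region, model ID, and prompt are placeholder assumptions, not details from the question:

import boto3

# Placeholder region and model ID; substitute values for your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our PTO policy."}]}],
    # Low temperature biases sampling toward the most likely tokens,
    # yielding more deterministic, on-topic answers.
    inferenceConfig={"temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])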
Why other options are incorrect:
B. Increase top-p: Top-p (nucleus sampling) restricts sampling to the smallest set of tokens whose cumulative probability reaches p. Increasing top-p widens the candidate pool, allowing more diverse responses and potentially making the problem worse.
C. Raise max-tokens: This controls the maximum length of the response, not its factual accuracy or relevance.
D. Disable stop-sequences: Stop sequences are used to control when the model stops generating text. Disabling them wouldn't address the issue of imaginative, off-topic responses.
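For contrast, here is a hypothetical inferenceConfig placing all four parameters side by side; the values are illustrative only and assume the same Converse API call sketched above:

inference_config = {
    "temperature": 0.2,        # randomness: the knob that actually addresses the problem
    "topP": 0.9,               # nucleus sampling cutoff; raising it admits more diverse tokens
    "maxTokens": 512,          # response length cap; unrelated to factual accuracy
    "stopSequences": ["END"],  # where generation halts; unrelated to topicality
}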
Best Practice: For factual, HR-related applications where accuracy and relevance are critical, use a lower temperature setting (0.1-0.3) to ensure the model stays focused on providing accurate, on-topic information.