
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent responses to the same input prompt. Which adjustment to an inference parameter should the company make to meet these requirements?
A. Decrease the temperature value.
B. Increase the temperature value.
C. Decrease the length of output tokens.
D. Increase the maximum generation length.
Explanation:
Correct Answer: A. Decrease the temperature value.
Why this is correct:
Temperature controls randomness: among LLM inference parameters, temperature governs how random the model's output is. A lower temperature value (closer to 0) makes the model more deterministic and consistent, while a higher temperature value (closer to 1) makes the output more creative and varied (see the sketch after this list).
Consistency requirement: The company needs "more consistent responses to the same input prompt." This means they want the LLM to produce similar outputs when given identical prompts, which requires reducing randomness.
Sentiment analysis context: For sentiment analysis tasks, consistency is particularly important because you want the same text to be classified the same way each time, not with varying sentiment scores.
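As a minimal sketch of how this looks in practice (assuming boto3's Bedrock Runtime Converse API; the region, model ID, and prompt below are illustrative, not prescribed by the question), setting a low temperature in the inference configuration makes repeated calls with the same prompt return near-identical sentiment labels:

```python
import boto3

# Bedrock Runtime client (region is illustrative)
client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = (
    "Classify the sentiment of this review as POSITIVE, NEGATIVE, or NEUTRAL: "
    "'The checkout process was slow and frustrating.'"
)

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={
        "temperature": 0.0,  # low temperature -> more deterministic, consistent output
        "maxTokens": 10,     # short label is enough; length does not drive consistency
    },
)

print(response["output"]["message"]["content"][0]["text"])
```

With temperature near 0 the model almost always picks the highest-probability token at each step, so identical prompts converge on the same classification across calls.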
Why other options are incorrect:
B. Increase the temperature value: This would make the model more creative and less consistent, producing more varied responses to the same prompt.
C. Decrease the length of output tokens: This controls the maximum length of the output but doesn't directly affect consistency of responses. It might truncate outputs but won't make them more consistent.
D. Increase the maximum generation length: This allows longer outputs but doesn't improve consistency. In fact, longer outputs might have more variation.
Technical details: