
**Answer:** Decrease the temperature value.
## Explanation

To achieve more consistent responses from a large language model (LLM) on Amazon Bedrock for sentiment analysis, the company should **decrease the temperature value**.

### Why Decreasing Temperature Works

The **temperature** parameter controls the randomness of an LLM's output generation. It operates on the probability distribution over the next token:

- **Lower temperature values (e.g., 0.1-0.3)**: Sharpen the probability distribution, making the model more deterministic. The model becomes more likely to select the highest-probability tokens, producing predictable, consistent outputs for identical inputs.
- **Higher temperature values (e.g., 0.7-1.0)**: Flatten the probability distribution, introducing more randomness and creativity. This increases output variability, which is undesirable for tasks requiring consistency.

### Application to Sentiment Analysis

Sentiment analysis requires **stable, reliable classifications** of text sentiment (positive, negative, neutral). Consistency is critical because:

1. **Business decisions** may depend on sentiment trends.
2. **Comparisons over time** require consistent measurement.
3. **Automated systems** need predictable outputs for the same inputs.

By decreasing the temperature, the LLM will produce nearly identical sentiment classifications for the same input prompt, meeting the company's requirements.

### Why Other Options Are Less Suitable

- **B. Increase the temperature value**: This would increase randomness and variability, making responses less consistent, which is the opposite of what's needed.
- **C. Decrease the length of output tokens**: Controlling output length can help with efficiency, but it doesn't directly address the consistency of the sentiment classification. The model could still produce varying sentiment labels within the same token limit.
- **D. Increase the maximum generation length**: This allows longer responses but doesn't improve consistency. In fact, longer outputs might introduce more variability in how sentiment is expressed.

### Best Practice Recommendation

For sentiment analysis on Amazon Bedrock, start with a low temperature value (e.g., 0.2) and test with sample inputs to verify consistency. Adjust slightly if needed, but keep the temperature below 0.5 to maintain the deterministic behavior required for reliable sentiment classification.
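The "sharpening" effect described above can be illustrated with a minimal sketch in plain Python (not Bedrock-specific; the logits are made-up values for three hypothetical sentiment labels):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to next-token probabilities at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate labels: positive, negative, neutral.
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.2)   # sharp: top token dominates
high = softmax_with_temperature(logits, 1.0)  # flatter: probability is spread out

# Lower temperature concentrates probability mass on the top token,
# so decoding becomes far more repeatable across identical requests.
print(max(low) > max(high))  # True
```

At temperature 0.2 the top label receives nearly all of the probability mass, while at 1.0 a meaningful share remains on the alternatives, which is exactly why low temperatures yield more consistent classifications.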
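As a concrete sketch of where this parameter is set, the request below uses the Bedrock Runtime `converse` API via boto3 with a low temperature. The model ID and prompt are placeholders, and the actual invocation (commented out) requires AWS credentials and Bedrock model access:

```python
import json

# Hypothetical model ID; substitute a model your account has access to.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

request = {
    "modelId": MODEL_ID,
    "messages": [
        {
            "role": "user",
            "content": [{"text": "Classify the sentiment of: 'The service was great!' "
                                 "Answer with exactly one word: positive, negative, or neutral."}],
        },
    ],
    "inferenceConfig": {
        "temperature": 0.2,  # low temperature for near-deterministic output
        "maxTokens": 10,     # a one-word label needs very few tokens
    },
}

print(json.dumps(request["inferenceConfig"]))

# Actual invocation (requires AWS credentials and model access):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

Run the same prompt several times and compare the labels; with the temperature at 0.2 they should rarely, if ever, differ.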
Author: LeetQuiz Editorial Team
**Question:** To achieve more consistent responses from an LLM on Amazon Bedrock for sentiment analysis, which inference parameter should be adjusted?

- **A.** Decrease the temperature value.
- **B.** Increase the temperature value.
- **C.** Decrease the length of output tokens.
- **D.** Increase the maximum generation length.