
Answer-first summary for fast verification
Answer: Increase the temperature to around 0.9 to encourage creative variation
## Explanation

When using generative AI models like those in Amazon Bedrock, the **temperature** parameter controls the randomness and creativity of the outputs:

- **Lower temperature values (e.g., 0.2)**: make the model more deterministic and focused on the most likely next tokens, resulting in more predictable and repetitive outputs.
- **Higher temperature values (e.g., 0.9)**: introduce more randomness and creativity, allowing the model to explore less likely but potentially more original and diverse outputs.

In this scenario, the marketing agency is experiencing **repetitive and unoriginal outputs**, which indicates the model is behaving too deterministically. Increasing the temperature to around 0.9 would encourage more creative variation in the generated brand-tagline ideas.

**Why the other options are incorrect:**

- **Option A (lower the temperature to 0.2)**: this would make the outputs even more repetitive and deterministic, worsening the problem.
- **Option C (reduce top-p to 0.3)**: top-p (nucleus sampling) controls the diversity of token selection; reducing it would also make outputs more deterministic, not more creative.
- **Option D (add stop sequences)**: stop sequences control when generation stops, not the creativity or originality of the content.

**Key takeaway**: temperature is the primary parameter for balancing creativity against determinism in generative AI models. Higher temperatures promote creative variation, while lower temperatures produce more focused, predictable outputs.
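As a minimal sketch of where this adjustment lives in practice, the snippet below builds a request body in the shape of the Amazon Bedrock Converse API, raising `temperature` to 0.9 while leaving `topP` broad. The model ID and prompt are purely illustrative, and the actual network call (which requires AWS credentials) is shown only as a comment.

```python
# Sketch: a Bedrock Converse-style request with a higher temperature
# for more creative tagline generation. Model ID and prompt are
# illustrative placeholders, not recommendations.
inference_config = {
    "temperature": 0.9,  # higher value -> more randomness and creative variation
    "topP": 0.95,        # keep nucleus sampling broad; lowering it would reduce diversity
    "maxTokens": 200,    # cap response length
}

request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    "messages": [
        {
            "role": "user",
            "content": [{"text": "Suggest five brand taglines for an eco-friendly coffee brand."}],
        }
    ],
    "inferenceConfig": inference_config,
}

# In a real application, this request would be sent with boto3:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)

print(request["inferenceConfig"])
```

The key point is simply where the parameter sits: creativity is tuned through `inferenceConfig`, not through stop sequences or token limits.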
Author: Jin H
A marketing agency is using Amazon Bedrock to generate brand-tagline ideas. The outputs feel repetitive and lack originality. What adjustment should the data-science team make?
A. Lower the temperature to 0.2 for more precise outputs
B. Increase the temperature to around 0.9 to encourage creative variation
C. Reduce the top-p to 0.3 to increase determinism
D. Add stop sequences to control token length