
Q4 – A law firm’s Bedrock bot must produce consistent phrasing across contracts. What setup works best?
A. temperature = 0.2
B. temperature = 0.9
C. top-p = 1.0
D. max-tokens = 1000
Explanation:
Correct Answer: A (temperature = 0.2)
Why temperature = 0.2 is correct:
The temperature parameter controls randomness in AI model outputs:
Lower temperature values (closer to 0) make outputs more deterministic and consistent
Higher temperature values (closer to 1) make outputs more creative and varied
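The mechanics behind those two bullets can be sketched numerically. Temperature divides the model's raw scores (logits) before they are turned into token probabilities, so a low temperature sharpens the distribution toward the top token while a high one flattens it. The logit values below are made up for illustration:

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens
logits = [2.0, 1.0, 0.5]

low = apply_temperature(logits, 0.2)   # probability mass piles onto the top token
high = apply_temperature(logits, 0.9)  # probability spreads across alternatives

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.2 the top token gets nearly all of the probability mass, which is why repeated generations come out phrased almost identically.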
For legal contracts, consistency is critical:
Legal documents require precise, standardized phrasing
Contract language must be predictable and uniform across documents
Minor variations in wording could have significant legal implications
Temperature = 0.2 provides:
High determinism and low randomness
Consistent phrasing across multiple contract generations
Reduced risk of unintended variations in legal language
Why other options are incorrect:
B (temperature = 0.9): Too high for legal documents; it would produce creative, varied outputs unsuitable for standardized contracts
C (top-p = 1.0): Uses all possible tokens, allowing too much variability rather than focusing on most probable outputs
D (max-tokens = 1000): Controls output length, not consistency of phrasing
Key Concept: In Amazon Bedrock and other AI services, the temperature parameter is crucial for controlling output consistency vs. creativity. For applications requiring standardized outputs (like legal contracts, technical documentation, or standardized responses), lower temperature values are essential.
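As a sketch of how this setting is actually passed to Bedrock, the snippet below builds an InvokeModel request body in the Anthropic Claude messages format with temperature set to 0.2. The model ID and prompt are placeholders, and the boto3 call itself is shown commented out since it requires AWS credentials:

```python
import json

# Sketch of an InvokeModel request body for a Claude model on Bedrock
# (messages-API format); the prompt text is a placeholder.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1000,   # caps response length only; does not control consistency
    "temperature": 0.2,   # low value for deterministic, uniform contract phrasing
    "messages": [
        {"role": "user", "content": "Draft the confidentiality clause."}
    ],
})

# With credentials configured, the request would be sent roughly like this:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#     body=body,
# )

print(json.loads(body)["temperature"])
```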