
Answer-first summary for fast verification
Answer: Configure the application to automatically set the temperature parameter to 0 when submitting the prompt to the LLM.
## Explanation

To achieve highly deterministic and stable responses from a large language model (LLM), the key is to minimize randomness in the model's output generation. In LLMs, the **temperature parameter** directly controls this randomness:

- **Temperature = 0**: The model always selects the most probable next token (greedy decoding), resulting in highly consistent outputs and minimal variation across runs with the same input.
- **Temperature = 1**: Typically the default setting, allowing moderate randomness and creativity, which is unsuitable when determinism is required.
- **Temperature > 1**: Increases randomness further, leading to more diverse but less predictable responses.

**Analysis of options:**

- **Option A (Correct)**: Configuring the application to set the temperature parameter to 0 directly addresses the requirement by forcing greedy decoding, where the model always picks the highest-probability token. This is the standard and most effective method for maximizing determinism in LLM outputs.
- **Options B and C**: Adding text like "make your response deterministic" to the prompt is unreliable. While prompt engineering can influence outputs, it does not guarantee determinism: the model may interpret such instructions variably, and the underlying sampling randomness (controlled by temperature) still affects results. Directly configuring model parameters is the better approach.
- **Option D**: Setting the temperature to 1 permits sampling from the full probability distribution, which introduces randomness and makes responses less stable, contradicting the requirements.

**Best practice**: In AWS AI services such as Amazon Bedrock, or when using models through Amazon SageMaker, setting `temperature=0` (optionally combined with a low `top_p` to further restrict nucleus sampling) is the recommended approach for deterministic behavior.
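The effect of temperature on token selection can be illustrated with a small self-contained sketch (the logit values below are toy numbers, not from any real model):

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding: always the argmax.
    temperature > 0 means softmax sampling over temperature-scaled logits.
    """
    if temperature == 0:
        # Greedy decoding: deterministically return the most probable token.
        return max(range(len(logits)), key=lambda i: logits[i])

    # Temperature-scaled softmax (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sample one index from the resulting distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]  # toy values; index 0 is the most probable token
greedy_runs = [sample_token(logits, 0) for _ in range(5)]
print(greedy_runs)  # always [0, 0, 0, 0, 0]: identical on every run
```

With `temperature=0` the loop yields the same index every time, while any positive temperature can return different indices across runs, which is exactly the instability the question asks to avoid.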
This aligns with AWS documentation and industry standards for use cases requiring reproducible outputs, such as in automated systems, testing, or compliance-sensitive applications. Thus, **Option A** is the optimal solution as it directly controls the model's inherent randomness parameter to ensure maximum determinism and stability.
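As a concrete illustration, here is a minimal sketch of how an application might force `temperature` to 0 when building an Amazon Bedrock request. The body schema shown is the Anthropic-style messages format; the exact schema and model ID vary by model, and the model ID below is a hypothetical choice:

```python
import json

def build_request(prompt: str) -> str:
    """Build an invoke_model request body with temperature forced to 0."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # assumed Anthropic-style schema
        "max_tokens": 512,
        "temperature": 0,  # greedy decoding for deterministic output
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# The application would then submit the body, e.g.:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model ID
#     body=build_request("Summarize our refund policy."),
# )

print(json.loads(build_request("hello"))["temperature"])  # -> 0
```

Centralizing the request construction like this ensures every prompt the application submits carries `temperature: 0`, rather than relying on per-call discipline.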
Author: LeetQuiz Editorial Team
A company needs to integrate a large language model (LLM) into its application to provide generative AI capabilities. The LLM's responses must be highly deterministic and stable.
Which approach fulfills these requirements?
- **A.** Configure the application to automatically set the temperature parameter to 0 when submitting the prompt to the LLM.
- **B.** Configure the application to automatically add "make your response deterministic" at the end of the prompt before submitting the prompt to the LLM.
- **C.** Configure the application to automatically add "make your response deterministic" at the beginning of the prompt before submitting the prompt to the LLM.
- **D.** Configure the application to automatically set the temperature parameter to 1 when submitting the prompt to the LLM.