
**Answer-first summary:** Decrease the temperature inference parameter for the model (Option C).
## Explanation of the Correct Answer

**C. Decrease the temperature inference parameter for the model** is the correct answer.

### Why Option C Is Optimal

The temperature parameter in LLMs controls the randomness of the model's output generation. It is an inference-time setting that reshapes the probability distribution over next-token predictions:

- **Lower temperature values (e.g., 0.1–0.5)** make the model more deterministic and conservative. The model concentrates on high-probability tokens, reducing creative but potentially incorrect outputs. This helps decrease hallucinations by making the model "stick to the facts" it has learned.
- **Higher temperature values (e.g., 0.7–1.0)** increase randomness and creativity, which can lead to more diverse but potentially less accurate outputs, including hallucinations.

Adjusting the temperature is a standard, immediate technique for reducing hallucinations during inference without retraining the model or changing the underlying architecture.

### Why the Other Options Are Less Suitable

**A. Set up Agents for Amazon Bedrock to supervise the model training.** While Agents for Amazon Bedrock can orchestrate workflows and integrate with knowledge bases, they are not designed to supervise model training or to reduce hallucinations. This approach adds complexity without addressing the core issue of output randomness.

**B. Use data pre-processing and remove any data that causes hallucinations.** This is impractical because:

1. It is difficult to identify which specific data points "cause" hallucinations.
2. Hallucinations often arise from the model's probabilistic decoding and architecture rather than from specific training data.
3. Even with perfect data, models can still hallucinate due to their probabilistic nature.

**D. Use a foundation model (FM) that is trained to not hallucinate.** No foundation model is completely immune to hallucinations.
While some models may be better calibrated than others, hallucinations are an inherent challenge in generative AI. This option proposes a solution that does not exist and offers no practical, immediate mitigation strategy.

### Best Practice Context

In AWS AI/ML practice, adjusting inference parameters such as temperature is a recommended first step when addressing hallucination issues. This approach is:

- **Immediate**: it can be implemented without model retraining.
- **Controllable**: it allows fine-tuning of the balance between creativity and factuality.
- **Cost-effective**: it requires no additional infrastructure or data processing.

Complementary approaches include prompt engineering, retrieval-augmented generation (RAG), and fine-tuning with reinforcement learning from human feedback (RLHF), but decreasing the temperature is the most direct and practical single action among the given options.
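The mechanics behind option C can be sketched with a toy softmax: dividing the logits by the temperature before normalizing concentrates probability mass on the highest-scoring token as the temperature drops. The logit values below are made up purely for illustration; real models operate over vocabulary-sized logit vectors.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaling by temperature first."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy next-token scores (hypothetical)

low = softmax_with_temperature(logits, 0.2)   # low temperature: near-greedy
high = softmax_with_temperature(logits, 1.0)  # high temperature: more spread out
```

At temperature 0.2 the top token absorbs almost all the probability mass, so sampling becomes nearly deterministic; at 1.0 the lower-scoring tokens retain a meaningful chance of being sampled, which is where off-distribution (hallucinated) continuations can slip in.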
Author: LeetQuiz Editorial Team
A company's large language model (LLM) is producing hallucinations.
What methods can the company implement to reduce these hallucinations?
- **A.** Set up Agents for Amazon Bedrock to supervise the model training.
- **B.** Use data pre-processing and remove any data that causes hallucinations.
- **C.** Decrease the temperature inference parameter for the model.
- **D.** Use a foundation model (FM) that is trained to not hallucinate.
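As a concrete illustration of option C, the request below shows roughly how a lower temperature would be expressed in the `inferenceConfig` of Amazon Bedrock's Converse API. The model ID is a placeholder and supported parameters vary by model, so treat this as a sketch, not a verified call; only the request payload is built here.

```python
# Hypothetical Converse API request payload; pass it to a boto3
# "bedrock-runtime" client's converse() call in real use.
request = {
    "modelId": "example.model-id-v1",  # placeholder, not a real model ID
    "messages": [
        {"role": "user", "content": [{"text": "Summarize our Q3 results."}]}
    ],
    "inferenceConfig": {
        "temperature": 0.2,  # lower value -> more deterministic output
        "topP": 0.9,
        "maxTokens": 512,
    },
}
```

Lowering `temperature` here is the single inference-side change the correct answer describes; no retraining or data work is required.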