
Answer-first summary for fast verification
Answer: B (Fine-tune the model on your data, hoping it will learn what is appropriate and not)
The question asks which approach should NOT be used to mitigate hallucination or confidential data leakage. Option B ("Fine-tune the model on your data, hoping it will learn what is appropriate and not") is the correct answer because fine-tuning alone does not reliably prevent either problem: a fine-tuned model can still hallucinate, especially when trained on limited data, and any confidential information present in the training set may be memorized and reproduced in responses. The community discussion shows 75% support for B, with comments noting that "hoping for the best is not the way" and that fine-tuning can even increase hallucination. The other options are valid mitigation strategies: A (guardrails) filters LLM outputs before they reach the user, C limits the data the model can retrieve based on the user's access level, and D uses a system prompt to steer model behavior.
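The two valid mitigations that act on data flow, options A and C, can be illustrated with a minimal sketch. The confidential patterns, document store, and access levels below are hypothetical placeholders; a real deployment would use a vetted PII/secret detector and the organization's actual access-control system.

```python
import re

# Hypothetical confidential patterns; stand-ins for a real secret/PII detector.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-like identifiers
    re.compile(r"(?i)\bproject\s+falcon\b"),  # example internal codename
]

def guardrail_filter(llm_output: str) -> str:
    """Option A sketch: redact confidential matches before the user sees them."""
    for pattern in CONFIDENTIAL_PATTERNS:
        llm_output = pattern.sub("[REDACTED]", llm_output)
    return llm_output

# Option C sketch: hypothetical document store tagged with access levels.
DOCS = [
    {"text": "Holiday policy: 25 days per year", "level": 1},
    {"text": "Executive compensation bands", "level": 3},
]

def retrieve_for_user(user_level: int) -> list[str]:
    """Only expose documents at or below the user's access level."""
    return [d["text"] for d in DOCS if d["level"] <= user_level]
```

Because the filter runs on outputs and the retriever runs on inputs, the two controls are complementary: even if retrieval is misconfigured, the guardrail still blocks known patterns, and vice versa.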
Author: LeetQuiz Editorial Team
A Generative AI Engineer has developed an LLM application to answer questions about internal company policies. The engineer must ensure the application does not hallucinate or leak confidential data.
Which of the following approaches should NOT be used to mitigate hallucination or the leakage of confidential data?
A. Add guardrails to filter outputs from the LLM before they are shown to the user
B. Fine-tune the model on your data, hoping it will learn what is appropriate and not
C. Limit the data available based on the user's access level
D. Use a strong system prompt to ensure the model aligns with your needs
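Option D can likewise be sketched as a system prompt that constrains the model to grounded policy answers. The prompt wording and the chat-message shape below are illustrative assumptions, not a specific vendor API.

```python
# Hypothetical system prompt for option D: constrain the model to the provided
# policy material and forbid disclosure of confidential data.
SYSTEM_PROMPT = (
    "You answer questions about internal company policies only. "
    "If the answer is not in the provided policy excerpts, say you do not know. "
    "Never reveal confidential data such as salaries, credentials, or codenames."
)

def build_messages(user_question: str, policy_excerpts: str) -> list[dict]:
    """Assemble a chat-style payload in the common system/user message shape."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": f"Policy excerpts:\n{policy_excerpts}\n\n"
                       f"Question: {user_question}",
        },
    ]
```

Note that a system prompt alone is a soft control: it guides behavior but can be bypassed, which is why it is combined with guardrails (A) and access limits (C) rather than relied on in isolation.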