
A Generative AI Engineer has developed an LLM application to answer questions about internal company policies. The engineer must ensure the application does not hallucinate or leak confidential data.
Which of the following approaches should NOT be used to mitigate hallucination or the leakage of confidential data?
A. Add guardrails to filter the LLM's outputs before they are shown to the user
B. Fine-tune the model on your data, hoping it will learn what is appropriate and what is not
C. Limit the data available based on the user's access level
D. Use a strong system prompt to ensure the model aligns with your needs
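
For context on option A: an output guardrail is typically a post-processing step that inspects the model's response before it reaches the user. Below is a minimal sketch in Python, assuming hypothetical regex patterns for the confidential terms; a production system would use a dedicated guardrail library or PII detector rather than a hand-rolled pattern list.

```python
import re

# Hypothetical patterns for this sketch only; real deployments should rely
# on a maintained PII/secret-detection tool or a guardrail service.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # US SSN-style numbers
    re.compile(r"(?i)\bproject\s+atlas\b"),                  # example internal codename
    re.compile(r"(?i)\b(api[_-]?key|password)\s*[:=]\s*\S+"),  # credential-like strings
]

REDACTED = "[REDACTED]"


def guard_output(llm_response: str) -> str:
    """Scrub confidential matches from the LLM response before display."""
    for pattern in CONFIDENTIAL_PATTERNS:
        llm_response = pattern.sub(REDACTED, llm_response)
    return llm_response


if __name__ == "__main__":
    raw = "Per Project Atlas, the admin password: hunter2 is unchanged."
    print(guard_output(raw))
    # -> Per [REDACTED], the admin [REDACTED] is unchanged.
```

Filtering at the output layer is a last line of defense; it complements, rather than replaces, restricting retrievable data by the user's access level (option C).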