
Ultimate access to all questions.
Which approach should a company implement to mitigate the risk of a large language model (LLM) being exploited through prompt engineering to execute harmful actions or leak confidential data in a conversational agent?
A
Create a prompt template that teaches the LLM to detect attack patterns.
B
Increase the temperature parameter on invocation requests to the LLM.
C
Avoid using LLMs that are not listed in Amazon SageMaker.
D
Decrease the number of input tokens on invocations of the LLM.