
Ultimate access to all questions.
Deep dive into the quiz with AI chat providers.
We prepare a focused prompt with your quiz and certificate details so each AI can offer a more tailored, in-depth explanation.
A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information. Which action will reduce these risks?
A
Create a prompt template that teaches the LLM to detect attack patterns.
B
Increase the temperature parameter on invocation requests to the LLM.
C
Avoid using LLMs that are not listed in Amazon SageMaker.
D
Decrease the number of input tokens on invocations of the LLM.
Explanation:
Correct Answer: A
Creating a prompt template that teaches the LLM to detect attack patterns is the most effective approach to mitigate prompt injection attacks and manipulation attempts.
Option B: Increase the temperature parameter
Option C: Avoid using LLMs not listed in Amazon SageMaker
Option D: Decrease the number of input tokens