
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company wants to classify the sentiment of text passages as positive or negative.
Which prompt engineering strategy meets these requirements?
Explanation:
Option A is correct because this approach uses few-shot learning (also known as in-context learning), an effective prompt engineering strategy for classification tasks such as sentiment analysis. By providing example text passages with their corresponding labels (positive/negative) in the prompt, you teach the LLM the exact pattern it should follow when classifying the new text passage.
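A minimal sketch of this few-shot approach, assuming Python with boto3 and the Bedrock Converse API; the model ID (Claude 3 Haiku here), AWS Region, and the example passages are placeholders you would replace with your own:

```python
import boto3

# Bedrock Runtime client (Region is an assumption; use whichever Region hosts your model)
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Few-shot prompt: labeled examples teach the model the classification pattern,
# followed by the new passage to classify.
few_shot_prompt = """Classify the sentiment of each text passage as Positive or Negative.

Text: "The checkout process was quick and the support team was wonderful."
Sentiment: Positive

Text: "My order arrived late and the item was damaged."
Sentiment: Negative

Text: "I love how easy the new app is to use."
Sentiment: Positive

Text: "{passage}"
Sentiment:"""

def classify_sentiment(passage: str) -> str:
    """Send the few-shot prompt to the model and return its one-word label."""
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[
            {"role": "user", "content": [{"text": few_shot_prompt.format(passage=passage)}]}
        ],
        inferenceConfig={"maxTokens": 5, "temperature": 0},  # short, deterministic output
    )
    return response["output"]["message"]["content"][0]["text"].strip()

print(classify_sentiment("The product stopped working after two days."))
# Expected output (model-dependent): Negative
```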
Why other options are incorrect:
Key Concept: In prompt engineering for classification tasks, few-shot learning (providing labeled examples in the prompt) is typically more effective than zero-shot prompting or prompts with unrelated examples, because it demonstrates the exact input/label pattern the model should follow.
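To illustrate the contrast, here is a hypothetical zero-shot prompt next to its few-shot counterpart for the same passage; the passages themselves are made up for the example:

```python
# Zero-shot: instruction only, no examples of the expected input/label pattern.
zero_shot_prompt = """Classify the sentiment of the following text passage as Positive or Negative.

Text: "The product stopped working after two days."
Sentiment:"""

# Few-shot: the same instruction, preceded by labeled examples that demonstrate
# the exact input/label pattern the model should reproduce.
few_shot_prompt = """Classify the sentiment of each text passage as Positive or Negative.

Text: "The support team resolved my issue in minutes."
Sentiment: Positive

Text: "The product stopped working after two days."
Sentiment:"""
```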