
Answer-first summary for fast verification
Answer: Provide examples of text passages with corresponding positive or negative labels in the prompt, followed by the new text passage to be classified.
## Detailed Explanation

The scenario calls for classifying text passages as positive or negative sentiment using an LLM on Amazon Bedrock, so the question is which prompt engineering strategy best fits that task.

### Optimal Strategy: Few-Shot Prompting (Option A)

**Option A** recommends providing examples of text passages with corresponding positive or negative labels in the prompt, followed by the new text passage to be classified. This approach is optimal because:

1. **Contextual Learning**: LLMs excel at pattern recognition. By providing labeled examples, the model can infer the task (sentiment classification) and the desired output format (positive/negative labels).
2. **Task Specification**: The examples clearly define what constitutes "positive" and "negative" sentiment within the company's specific context, reducing ambiguity.
3. **Format Consistency**: Demonstrating the expected input-output pattern helps ensure the LLM produces responses in the correct format.
4. **Amazon Bedrock Best Practice**: This aligns with AWS guidance on prompt engineering for classification tasks, where few-shot examples improve accuracy and reliability.

### Analysis of Other Options

**Option B**: Providing a detailed explanation of sentiment analysis and how LLMs work is suboptimal because:

- LLMs don't require theoretical explanations; they need practical demonstrations of the task.
- This approach wastes tokens without improving classification performance.
- It may introduce unnecessary complexity without providing actionable guidance.

**Option C**: Providing only the new text passage without context is ineffective because:

- The LLM lacks clear instructions about the specific task.
- Without examples, the model might produce inconsistent or incorrect output formats.
- This zero-shot approach typically yields lower accuracy for specialized tasks like sentiment classification.
**Option D**: Providing examples of unrelated tasks (such as text summarization) is counterproductive because:

- It introduces irrelevant context that may confuse the model.
- The LLM might attempt to perform multiple tasks simultaneously, reducing sentiment classification accuracy.
- It violates the principle of task-specific prompt design.

### Technical Considerations for Amazon Bedrock

When implementing this on Amazon Bedrock:

1. **Example Selection**: Choose diverse, representative examples that cover edge cases.
2. **Prompt Structure**: Use clear formatting (e.g., `Text: [passage]\nSentiment: [label]`) to separate examples from the target text.
3. **Model Selection**: Different foundation models on Bedrock may respond differently to few-shot prompts, requiring testing and optimization.

This approach represents industry best practice for classification tasks with LLMs, balancing effectiveness with token efficiency.
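The few-shot structure described above can be sketched in Python. The example passages and the model ID are illustrative assumptions, not values from the question; the actual Bedrock call is shown as a hedged comment since it requires AWS credentials and model access:

```python
# Minimal sketch of a few-shot sentiment-classification prompt for an
# Amazon Bedrock model. Labeled examples come first, then the new passage,
# ending with "Sentiment:" so the model completes with a single label.

def build_few_shot_prompt(examples, new_passage):
    """Format (text, label) example pairs followed by the passage to classify."""
    lines = []
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    lines.append(f"Text: {new_passage}\nSentiment:")
    return "\n".join(lines)

# Hypothetical labeled examples (diverse, representative of the task).
EXAMPLES = [
    ("The product arrived on time and works perfectly.", "positive"),
    ("Support never responded and the unit failed in a week.", "negative"),
]

prompt = build_few_shot_prompt(EXAMPLES, "Setup was quick and painless.")

# Sending the prompt via the Bedrock Runtime Converse API (requires AWS
# credentials; the model ID below is an assumption -- substitute one
# enabled in your account):
#
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     messages=[{"role": "user", "content": [{"text": prompt}]}],
#     inferenceConfig={"maxTokens": 5, "temperature": 0},
# )
# label = response["output"]["message"]["content"][0]["text"].strip()
```

Keeping `temperature` at 0 and capping `maxTokens` low nudges the model toward emitting only the label, which keeps the output format consistent across passages.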
Author: LeetQuiz Editorial Team
## Question

Which prompt engineering strategy should be used to classify text passages as positive or negative sentiment using a large language model (LLM) on Amazon Bedrock?

**A.** Provide examples of text passages with corresponding positive or negative labels in the prompt, followed by the new text passage to be classified.

**B.** Provide a detailed explanation of sentiment analysis and how LLMs work in the prompt.

**C.** Provide the new text passage to be classified without any additional context or examples.

**D.** Provide the new text passage with a few examples of unrelated tasks, such as text summarization or question answering.