
Answer-first summary for fast verification
Answer: B. Retrieval-augmented prompting
## Explanation

**Retrieval-Augmented Generation (RAG)** is the prompting method being used in this scenario. Here's why:

### Key Concepts

1. **Hallucinations in LLMs**: Large language models can generate plausible-sounding but incorrect or fabricated information.
2. **Retrieval-Augmented Generation (RAG)**: This approach combines:
   - **Retrieval**: Fetching relevant information from external knowledge bases
   - **Augmentation**: Adding this retrieved information to the prompt
   - **Generation**: Using the augmented prompt to produce more accurate responses

### Why Option B Is Correct

- The developer is specifically adding **external facts retrieved from a knowledge base** into the prompt.
- This is the core mechanism of RAG: enhancing prompts with retrieved information to improve accuracy.
- RAG is particularly effective at reducing hallucinations because it grounds the model's responses in factual, verifiable information.

### Other Options Explained

- **A. Chain-of-thought prompting**: Breaks a complex problem into intermediate reasoning steps; it does not add external facts.
- **C. Zero-shot prompting**: Asks the model to perform a task without any examples; it does not retrieve external information.
- **D. Self-evaluation prompting**: Has the model evaluate its own responses; it does not incorporate external knowledge.

### Amazon Bedrock Context

In Amazon Bedrock, RAG can be implemented using:

- Knowledge Bases for Amazon Bedrock
- Vector databases for storing and retrieving information
- Embedding models to convert text into vector representations
- Retrieval mechanisms to find relevant information

This approach gives the model access to up-to-date, domain-specific information, significantly reducing the likelihood of hallucinations.
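The retrieve → augment → generate flow above can be sketched in a few lines of plain Python. This is a toy illustration, not a Bedrock implementation: the in-memory `KNOWLEDGE_BASE`, the keyword-overlap `retrieve` function, and the prompt template are all stand-ins for what a real system would do with an embedding model, a vector database, and a Bedrock model invocation.

```python
# Toy retrieval-augmented prompting sketch.
# Assumptions: an in-memory document list and keyword-overlap scoring
# stand in for a vector store and embedding-based retrieval.

KNOWLEDGE_BASE = [
    "Amazon Bedrock is a fully managed service for foundation models.",
    "RAG grounds model responses in retrieved external documents.",
    "Chain-of-thought prompting elicits intermediate reasoning steps.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    words = set(query.lower().replace("?", "").split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().rstrip(".").split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(question: str) -> str:
    """Augment the prompt with retrieved facts before generation."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_rag_prompt("What is Amazon Bedrock?"))
```

The augmented prompt would then be sent to the model (e.g. via a Bedrock model invocation) instead of the bare question, which is exactly the grounding step that reduces hallucinations.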
Author: Ritesh Yadav
A developer wants to reduce hallucinations in a Bedrock-powered application by adding external facts retrieved from a knowledge base into the prompt. Which prompting method is being used?
A. Chain-of-thought prompting
B. Retrieval-augmented prompting
C. Zero-shot prompting
D. Self-evaluation prompting