
**Answer: B) Contextual / Retrieval-Augmented Prompting (RAG)**
## Explanation

This scenario describes **Retrieval-Augmented Generation (RAG)**, a contextual prompting approach. Here's why:

### Key Characteristics of RAG

1. **External Knowledge Retrieval**: The system retrieves relevant information from an external source (the company handbook stored in an S3 bucket).
2. **Context Injection**: The retrieved content is included in the prompt before the answer is generated.
3. **Dynamic Information Access**: Unlike static prompts, RAG dynamically fetches relevant information based on the query.

### Why the Other Options Are Incorrect

- **A) Few-shot Prompting**: Involves providing examples in the prompt, not retrieving external documents.
- **C) Zero-shot Prompting**: Provides no examples or external context; the model answers based solely on its pre-trained knowledge.
- **D) Chain-of-Thought Prompting**: Involves breaking reasoning down step by step, not retrieving external documents.

### Real-World Application

This approach is commonly used in enterprise applications where:

- Company policies, manuals, or documentation need to be referenced
- The information is too large to fit in a single prompt
- Documents are frequently updated and must be accessed dynamically

The S3 bucket serves as the knowledge base, and the retrieval mechanism finds the relevant policy paragraphs, providing the context needed for accurate, up-to-date answers.
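The retrieve-then-inject flow described above can be sketched in a few lines. This is a toy illustration, not a production pipeline: the `retrieve` function uses simple keyword overlap in place of a real vector store or S3-backed search, and the function names and handbook snippets are made up for the example.

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query (toy stand-in
    for a real retriever such as a vector search over S3 content)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_rag_prompt(query, documents):
    """Inject the retrieved excerpts into the prompt ahead of the question."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Use the following policy excerpts to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )


# Hypothetical handbook paragraphs (in practice, fetched from the S3 bucket).
handbook = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be filed within 30 days of purchase.",
]

prompt = build_rag_prompt("How many vacation days do employees accrue?", handbook)
print(prompt)
```

The resulting `prompt` string is what would be sent to the model: the relevant policy paragraph appears as context before the question, which is exactly what distinguishes RAG from zero-shot or few-shot prompting.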
Author: Ritesh Yadav
An HR assistant model must answer employee policy questions based on the company handbook stored in an S3 bucket. The system retrieves relevant policy paragraphs and includes them in the prompt before generating the answer. Which prompting approach is being used?
- A) Few-shot Prompting
- B) Contextual / Retrieval-Augmented Prompting (RAG)
- C) Zero-shot Prompting
- D) Chain-of-Thought Prompting