
**Answer: B) Contextual / Retrieval-Augmented Prompting (RAG)**
## Explanation

This scenario describes **Retrieval-Augmented Generation (RAG)**, a contextual prompting approach. Here's why:

### Key Characteristics of RAG

- **External Knowledge Retrieval**: The system retrieves relevant information from an external source (an S3 bucket containing the company handbook)
- **Context Integration**: Retrieved policy paragraphs are included in the prompt before generating the answer
- **Dynamic Context**: The model uses up-to-date, specific information rather than relying solely on its pre-trained knowledge

### Comparison with the Other Options

- **A) Few-shot Prompting**: Provides worked examples in the prompt; it does not retrieve external documents
- **C) Zero-shot Prompting**: Provides no examples or external context, just the question itself
- **D) Chain-of-Thought Prompting**: Focuses on step-by-step reasoning, not document retrieval

### Why RAG Is Ideal for This Use Case

1. **Accuracy**: Answers are grounded in the latest company policies
2. **Scalability**: Can handle large document repositories
3. **Maintainability**: Policies can be updated in S3 without retraining the model
4. **Transparency**: Shows which specific policy sections were used for the answer

This approach is particularly valuable for HR systems, where policy accuracy and compliance are critical.
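The retrieve-then-prompt flow described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the handbook is an in-memory dict standing in for documents fetched from S3, the retriever is a toy word-overlap scorer rather than a vector search, and no actual LLM call is made. All names (`HANDBOOK`, `retrieve`, `build_prompt`) are hypothetical.

```python
# Toy stand-in for handbook paragraphs fetched from an S3 bucket.
HANDBOOK = {
    "pto": "Employees accrue 1.5 days of paid time off per month.",
    "remote": "Remote work is allowed up to 3 days per week with manager approval.",
    "expenses": "Expense reports must be submitted within 30 days of purchase.",
}


def retrieve(question: str, docs: dict, k: int = 2) -> list:
    """Rank paragraphs by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )[:k]


def build_prompt(question: str, docs: dict) -> str:
    """Prepend retrieved context to the question -- the core RAG step."""
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"


prompt = build_prompt("How many days of paid time off do employees accrue?", HANDBOOK)
print(prompt)
```

The resulting prompt would then be sent to the model, which answers using the injected policy text instead of relying on its pre-trained knowledge alone.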
Author: Ritesh Yadav
An HR assistant model must answer employee policy questions based on the company handbook stored in an S3 bucket. The system retrieves relevant policy paragraphs and includes them in the prompt before generating the answer. Which prompting approach is being used?
A. Few-shot Prompting
B. Contextual / Retrieval-Augmented Prompting (RAG)
C. Zero-shot Prompting
D. Chain-of-Thought Prompting