
**Answer:** 1. Embedding Model → 2. Vector Search → 3. Context-Augmented Prompt → 4. Response-Generating LLM
## Explanation

In a typical RAG (Retrieval-Augmented Generation) pipeline, the correct sequence is:

1. **Embedding Model**: The user's question is first converted into a vector representation using an embedding model.
2. **Vector Search**: That vector is used to query a vector database for relevant documents or chunks.
3. **Context-Augmented Prompt**: The retrieved context is combined with the original question to form an augmented prompt.
4. **Response-Generating LLM**: The LLM generates the final response from the context-augmented prompt.

This sequence ensures that the system first represents the user's query as a vector (embedding), retrieves relevant information (vector search), prepares the input for the LLM (context augmentation), and finally generates the response. Option A correctly follows this standard RAG workflow.
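The four stages above can be sketched end to end in a few lines. This is a minimal, self-contained illustration: the embedding model is replaced by a toy bag-of-words vectorizer, the vector database by an in-memory cosine-similarity search, and the LLM call by a stub; the corpus and all function names are hypothetical stand-ins, not any real library's API.

```python
import math

# Hypothetical document store standing in for the indexed knowledge base.
DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Shipping to Europe takes 7 to 10 days.",
    "Our chatbot supports English and Spanish.",
]

VOCAB = sorted({w for d in DOCUMENTS for w in d.lower().split()})

def embed(text):
    """Step 1: embedding model (toy bag-of-words vector here;
    a real system would call an embedding model instead)."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def vector_search(query_vec, k=1):
    """Step 2: vector search. Rank stored document vectors by
    cosine similarity to the query vector, return the top k."""
    scored = sorted(((cosine(query_vec, embed(d)), d) for d in DOCUMENTS),
                    reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(question, context_docs):
    """Step 3: context-augmented prompt. Retrieved text is
    prepended to the user's original question."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def generate(prompt):
    """Step 4: response-generating LLM (stubbed here)."""
    return f"[LLM completion for a {len(prompt)}-character prompt]"

question = "How long do refunds take?"
prompt = build_prompt(question, vector_search(embed(question)))
answer = generate(prompt)
```

With this toy setup, the refund question retrieves the refund document, so the stubbed LLM receives a prompt that already contains the relevant policy text before it generates an answer.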
Author: LeetQuiz
Question: 31
A company has a typical RAG-enabled, customer-facing chatbot on its website.
[Diagram showing flow: User Questions → Box 1 → Box 2 → Box 3 → Box 4 → Output]
Select the correct sequence of components a user's questions will go through before the final output is returned. Use the diagram above for reference.
A
B
C
D