
## Answer

**Knowledge Bases for Amazon Bedrock (RAG)**
## Explanation

**Knowledge Bases for Amazon Bedrock (RAG)** is the correct approach because:

1. **RAG (Retrieval Augmented Generation)** retrieves relevant passages from an external knowledge source at query time and supplies them to the model as context, which is exactly what a document-search chatbot needs.
2. **Knowledge Bases for Amazon Bedrock** connects foundation models to company data stored in Amazon S3.
3. The process involves:
   - Ingesting documents from S3
   - Chunking them into smaller pieces
   - Creating vector embeddings for each chunk
   - Storing the embeddings in a vector database
   - Enabling semantic search over the stored vectors
4. When a user asks a question, the system retrieves the most relevant passages from the academic papers and uses them to generate an accurate, grounded response.

**Why the other options are incorrect:**

- **A) Ground Truth + Bedrock**: SageMaker Ground Truth creates labeled datasets for training ML models; it does not make documents searchable.
- **C) Bedrock Fine-Tuning**: Fine-tuning trains a model on specific data to improve performance on particular tasks, but it does not provide document retrieval or search.
- **D) Bedrock Guardrails**: Guardrails implement safety controls and content filtering, not document search.

This solution lets researchers ask natural-language questions about the academic papers and receive accurate answers grounded in the content stored in S3.
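The ingest-chunk-embed-retrieve pipeline described above can be sketched with a toy example. This is purely illustrative: it uses bag-of-words vectors and cosine similarity in place of a real embedding model and vector database, which is what the managed service actually provides.

```python
from collections import Counter
import math

def chunk(text, size=40):
    """Split a document into fixed-size word chunks (the ingestion step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    A real knowledge base uses a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index, top_k=1):
    """Return the chunks most similar to the query -- the 'R' in RAG."""
    q = embed(query)
    ranked = sorted(index, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

# Hypothetical paper abstracts standing in for documents in S3.
papers = [
    "Transformers use self attention to model long range dependencies in text.",
    "Convolutional networks excel at image classification tasks.",
]
index = [c for doc in papers for c in chunk(doc)]

best = retrieve("how does attention handle long range dependencies", index)
print(best[0])  # the transformer chunk is the closest match
```

In the real service, the retrieved chunks would then be passed to a foundation model as context so it can generate a grounded answer.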
Author: Jin H
A university research lab stores large collections of academic papers in Amazon S3 and wants to make them searchable via a Bedrock chatbot. Which approach provides this functionality?
- A) Ground Truth + Bedrock
- B) Knowledge Bases for Amazon Bedrock (RAG)
- C) Bedrock Fine-Tuning
- D) Bedrock Guardrails
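Once a knowledge base is set up, the chatbot side of option B is typically a single call to the Bedrock Agent Runtime `RetrieveAndGenerate` API. The sketch below only builds the request payload (the knowledge base ID and model ARN are placeholders); the actual call, commented out, would require AWS credentials and `boto3`.

```python
import json

# Request shape for bedrock-agent-runtime's retrieve_and_generate.
# "KB1234567890" and the model ARN are placeholder values.
request = {
    "input": {"text": "Which papers discuss attention mechanisms?"},
    "retrieveAndGenerateConfiguration": {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
}

# With credentials configured, the call would look like:
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(**request)
#   print(response["output"]["text"])
print(json.dumps(request, indent=2))
```

The service handles retrieval from the vector store and answer generation in one call, so no custom RAG orchestration code is needed.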