
Answer-first summary for fast verification
Answer: Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.
## Explanation

**Correct Answer: D**: Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.

### Why Option D Is Correct

1. **Cost-effectiveness**: Amazon Bedrock knowledge bases are designed specifically for Retrieval-Augmented Generation (RAG) applications. They retrieve only the relevant information from documents, so entire documents never need to be included in a prompt.
2. **Efficient context management**: Knowledge bases automatically:
   - Convert PDF files into embeddings
   - Store the embeddings in a vector database
   - Retrieve only the most relevant sections for each user query

   This avoids the high token cost of including entire PDFs in prompts.
3. **Scalability**: As the number of PDF files grows, the knowledge base scales efficiently; per-query cost does not grow linearly with corpus size.

### Why the Other Options Are Not Most Cost-Effective

**Options A and B (prompt engineering with PDF context)**:
- **Token cost**: Including PDF content directly in prompts consumes a large number of tokens on every request.
- **Context window limits**: LLMs have token limits (typically 4K-128K tokens), and entire PDFs may exceed them.
- **Inefficiency**: Every query would have to resend the PDF content, leading to redundant processing.

**Option C (fine-tuning a model)**:
- **High initial cost**: Fine-tuning requires significant computational resources and is expensive.
- **Maintenance cost**: Each time the manuals are updated, the model would need to be fine-tuned again.
- **Overkill for this use case**: Fine-tuning is better suited to changing a model's behavior or style than to adding factual knowledge.

### Key Benefits of an Amazon Bedrock Knowledge Base

- **Automatic document processing**: Handles PDF parsing, chunking, and embedding generation.
- **Smart retrieval**: Uses semantic search to find the most relevant information.
- **Cost optimization**: Retrieves and processes only the relevant document sections.
- **Real-time updates**: Documents can be added or updated without retraining a model.

This approach provides the best balance of accuracy, performance, and cost for a chat interface over product manuals.
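As a minimal sketch of how Option D looks in practice, the snippet below builds a request for Bedrock's `RetrieveAndGenerate` API, which retrieves relevant chunks from a knowledge base and passes them to the model as context. The knowledge base ID and model ARN are placeholders, and the actual boto3 call is shown commented out because it requires AWS credentials and a provisioned knowledge base.

```python
def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Build the request body for Bedrock's RetrieveAndGenerate API.

    Bedrock retrieves the most relevant document chunks from the
    knowledge base and injects them as context for the model, so the
    prompt never has to carry entire PDF manuals.
    """
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,    # placeholder knowledge base ID
                "modelArn": model_arn,       # placeholder model ARN
            },
        },
    }

request = build_rag_request(
    "How do I reset the device to factory settings?",
    kb_id="EXAMPLEKB01",
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
)

# With credentials and a real knowledge base configured:
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**request)
# print(response["output"]["text"])
```

Note that billing covers only the retrieved chunks plus the user question, not the full document set, which is the cost advantage over Options A and B.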
Author: Ritesh Yadav
A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are stored as PDF files. Which solution meets these requirements MOST cost-effectively?
A
Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.
B
Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.
C
Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the fine-tuned model to process user prompts.
D
Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.