
## Answer (for fast verification)

**B. Develop a Retrieval Augmented Generation (RAG) agent by using Amazon Bedrock.**
## Detailed Explanation

To build a conversational AI assistant that uses LLMs and a knowledge base to answer customer questions about flight schedules, bookings, and payments with the **LEAST development effort**, the optimal solution is **B: Develop a Retrieval Augmented Generation (RAG) agent by using Amazon Bedrock**.

### Why Option B Is Correct

1. **Amazon Bedrock provides fully managed access to foundation models**: Bedrock offers a choice of high-performing foundation models (such as Anthropic Claude and Amazon Titan) without requiring any training, fine-tuning, or infrastructure management. This eliminates the significant development effort associated with model training and deployment.
2. **RAG integrates LLMs with knowledge bases**: The RAG approach connects foundation models to the company's knowledge base (e.g., flight schedules, booking policies, payment information) to generate accurate, context-aware responses. This is crucial for providing up-to-date information without retraining models.
3. **Bedrock Agents simplify conversational AI development**: Amazon Bedrock Agents provide built-in capabilities for building generative AI applications that run multi-step tasks across company systems and data sources. They handle conversation management, API integrations (such as payment systems), and knowledge retrieval, significantly reducing custom coding.
4. **Serverless, scalable architecture**: Bedrock is a fully managed service that scales automatically, eliminating infrastructure management overhead. It integrates with other AWS services such as Lambda, Amazon S3, and Amazon OpenSearch Service for knowledge base storage and retrieval.

### Why the Other Options Are Less Suitable

- **A: Train models on Amazon SageMaker Autopilot**: While Autopilot automates some aspects of model training, it still requires significant data preparation, feature engineering, and deployment effort. Training custom models from scratch is far more development-intensive than using pre-trained foundation models with RAG.
- **C: Create a Python application by using Amazon Q Developer**: Amazon Q Developer is primarily a coding assistant, not a platform for building conversational AI applications. This option would require building the entire chatbot stack from scratch, including LLM integration, knowledge base management, and conversation logic, resulting in the greatest development effort.
- **D: Fine-tune models on Amazon SageMaker JumpStart**: Fine-tuning requires substantial effort in data collection, labeling, training, and deployment. While JumpStart provides pre-built models, fine-tuning them for a specific use case is more complex and time-consuming than Bedrock's RAG approach, which uses existing foundation models without modification.

### Key Considerations for Minimal Development Effort

The requirement emphasizes **LEAST development effort**, which prioritizes:

1. **Minimal coding** - Bedrock Agents provide high-level abstractions.
2. **No model training or fine-tuning** - Pre-trained foundation models are used as-is.
3. **Built-in integration** - For knowledge bases and external systems.
4. **Managed service benefits** - No infrastructure to manage.

Amazon Bedrock with a RAG architecture addresses all of these requirements, providing the fastest path to a production-ready conversational AI assistant with integrated knowledge retrieval.
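To make the "minimal coding" point concrete, the managed RAG flow described above can be sketched with boto3's `bedrock-agent-runtime` client and its `retrieve_and_generate` operation, which handles retrieval from a Bedrock knowledge base and answer generation in a single call. This is a sketch, not a definitive implementation: the knowledge base ID, model ARN, and sample question below are hypothetical placeholders, and a real deployment needs AWS credentials plus a provisioned knowledge base.

```python
def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Build the keyword arguments for a bedrock-agent-runtime
    retrieve_and_generate call against an existing knowledge base."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,   # placeholder knowledge base ID
                "modelArn": model_arn,      # placeholder foundation model ARN
            },
        },
    }


def ask(question: str, kb_id: str, model_arn: str) -> str:
    """Send one question through Bedrock's managed RAG flow and
    return the generated answer text."""
    # Imported here so the request builder above stays usable
    # without the AWS SDK or credentials present.
    import boto3

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        **build_rag_request(question, kb_id, model_arn)
    )
    return response["output"]["text"]


# Hypothetical usage (requires AWS credentials and a provisioned knowledge base):
# answer = ask(
#     "What is the baggage fee for flight AB123?",
#     kb_id="KBEXAMPLE01",
#     model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
# )
```

Note how little application code is involved: retrieval, prompt augmentation, and generation are all performed by the managed service, which is exactly why this option carries the least development effort.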
Author: LeetQuiz Editorial Team
## Question

Which AWS solution requires the least development effort to build a text-based chatbot that uses large language models (LLMs) and a knowledge base to answer customer questions about flight schedules, bookings, and payments?

- **A.** Train models on Amazon SageMaker Autopilot.
- **B.** Develop a Retrieval Augmented Generation (RAG) agent by using Amazon Bedrock.
- **C.** Create a Python application by using Amazon Q Developer.
- **D.** Fine-tune models on Amazon SageMaker JumpStart.