
Explanation:
Option A is correct because it provides the most comprehensive and integrated solution with the least operational overhead:
Amazon Bedrock Knowledge Bases with source attribution enabled - This directly addresses the requirement to cite specific sources and link data claims to source documents. Knowledge Bases automatically handle document ingestion, chunking, embedding, and retrieval with built-in source attribution.
Anthropic Claude Messages API with RAG - This enables retrieval-augmented generation which combines the company's data sources with the model's reasoning capabilities.
High-relevance thresholds for source documents - This ensures that only highly relevant sources are used, improving accuracy and potentially reducing latency.
Amazon S3 for auditing - Provides a simple, cost-effective store for the reasoning and citations that need to be retained; a minimal code sketch of this end-to-end flow follows below.
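To make the integrated flow concrete, here is a minimal sketch using boto3. It assumes a Knowledge Base that has already ingested the company documents; the knowledge base ID, model ARN, bucket name, and the numberOfResults value are hypothetical placeholders, and numberOfResults stands in for however the relevance cutoff is actually tuned. The RetrieveAndGenerate API returns the generated answer together with citations that point back to the retrieved source documents, which the sketch then archives in S3 for auditing.

```python
import json
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")
s3 = boto3.client("s3")

# Hypothetical identifiers -- replace with real resources.
KNOWLEDGE_BASE_ID = "EXAMPLEKBID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
AUDIT_BUCKET = "example-advisory-audit-bucket"

def advise(question: str) -> dict:
    """Retrieve company data, generate a cited answer, and archive it for auditing."""
    response = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
                # Keep retrieval focused on the most relevant chunks, which
                # helps stay under the 3-second latency target.
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {"numberOfResults": 5}
                },
            },
        },
    )

    answer = response["output"]["text"]
    # Each citation links a span of the generated text to the retrieved
    # source documents (S3 URIs and excerpts) that support it.
    citations = response.get("citations", [])

    # Archive the question, answer, and citations in S3 for auditing.
    audit_record = {"question": question, "answer": answer, "citations": citations}
    s3.put_object(
        Bucket=AUDIT_BUCKET,
        Key=f"audit/{response['sessionId']}.json",
        Body=json.dumps(audit_record, default=str).encode("utf-8"),
    )
    return audit_record
```

Because ingestion, chunking, embedding, retrieval, and citation tracking are all handled by the managed service, the application code stays small, which is the operational-overhead argument for Option A.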
Why other options are less optimal:
Option B: While extended thinking produces reasoning traces, it does not by itself retrieve from company data sources or attach source citations (a minimal sketch of the thinking-budget configuration appears after this list). DynamoDB also adds complexity compared to S3 for simple audit storage.
Option C: Using SageMaker with custom models introduces significant operational overhead for deployment, scaling, and maintenance. The separate RDS database for citations adds complexity.
Option D: Chain-of-thought prompting does provide step-by-step reasoning, but building custom retrieval tracking on top of the Knowledge Bases API requires more development effort than the integrated source attribution in Option A.
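For contrast, the extended-thinking setup referenced in Option B would look roughly like the sketch below, assuming the Anthropic Messages request format that Bedrock InvokeModel accepts for Claude models (the model ID is a placeholder). It yields reasoning traces, but nothing in it retrieves company documents or attaches source citations; that plumbing, plus persisting traces to DynamoDB, would have to be built separately.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Hypothetical model ID -- use a Claude model that supports extended thinking.
MODEL_ID = "anthropic.claude-3-7-sonnet-20250219-v1:0"

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 8000,
    # 4,000-token thinking budget, as described in Option B.
    "thinking": {"type": "enabled", "budget_tokens": 4000},
    "messages": [
        {"role": "user",
         "content": "Recommend an investment strategy and explain your reasoning."}
    ],
}

response = bedrock_runtime.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
payload = json.loads(response["body"].read())

# The response content mixes "thinking" blocks (reasoning traces) with "text"
# blocks (the final answer); neither carries source citations by itself.
for block in payload.get("content", []):
    if block.get("type") == "thinking":
        print("REASONING:", block.get("thinking"))
    elif block.get("type") == "text":
        print("ANSWER:", block.get("text"))
```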
Key benefits of Option A:
Fully managed ingestion, chunking, embedding, and retrieval through Knowledge Bases, which minimizes operational overhead.
Built-in source attribution that links each data claim to the specific source document.
High-relevance thresholds that keep retrieval focused, helping the application stay under the 3-second latency target.
Simple, low-cost auditing of reasoning and citations in Amazon S3.
A company is building an AI advisory application by using Amazon Bedrock. The application will provide recommendations to customers. The company needs the application to explain its reasoning process and cite specific sources for data. The application must retrieve information from company data sources and show step-by-step reasoning for recommendations. The application must also link data claims to source documents and maintain response latency under 3 seconds.
Which solution will meet these requirements with the LEAST operational overhead?
A
Use Amazon Bedrock Knowledge Bases with source attribution enabled. Use the Anthropic Claude Messages API with RAG to set high-relevance thresholds for source documents. Store reasoning and citations in Amazon S3 for auditing purposes.
B
Use Amazon Bedrock with Anthropic Claude models and extended thinking. Configure a 4,000-token thinking budget. Store reasoning traces and citations in Amazon DynamoDB for auditing purposes.
C
Configure Amazon SageMaker AI with a custom Anthropic Claude model. Use the model's reasoning parameter and AWS Lambda to process responses. Add source citations from a separate Amazon RDS database.
D
Use Amazon Bedrock with Anthropic Claude models and chain-of-thought reasoning. Configure custom retrieval tracking with the Amazon Bedrock Knowledge Bases API. Use Amazon CloudWatch to monitor response latency metrics.