
Explanation:
Option D is correct because it provides the least operational overhead while effectively addressing the problem of retrieving contextually irrelevant documents. Here's why:
Native Integration: Amazon Bedrock Knowledge Bases has built-in reranking capabilities that can be configured directly within the service. This eliminates the need to deploy and manage separate infrastructure.
Managed Service: The reranking model is fully managed by AWS, requiring no operational maintenance, scaling, or monitoring from the customer.
Seamless Integration: Because the company already uses Amazon Bedrock Knowledge Bases, enabling reranking is a simple configuration change rather than an architectural overhaul (a minimal configuration sketch follows this list).
Contextual Assessment: The reranker model specifically addresses the problem of semantic similarity vs. contextual relevance by reordering results based on deeper contextual understanding.
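
For illustration, here is a minimal sketch (Python, boto3) of what Option D looks like in practice. The knowledge base ID, model ARNs, region, and result counts are hypothetical placeholders, and the exact request fields should be confirmed against the current Bedrock Agent Runtime documentation. The point is that reranking is expressed entirely inside the existing RetrieveAndGenerateStream call, with no additional infrastructure to operate.

    import boto3

    # Hypothetical identifiers for illustration only.
    KB_ID = "EXAMPLEKBID"
    GENERATION_MODEL_ARN = "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
    RERANK_MODEL_ARN = "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"

    client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

    # Streaming RAG call with reranking enabled inside the knowledge base
    # retrieval configuration -- no separate ranking service to run.
    response = client.retrieve_and_generate_stream(
        input={"text": "What are the data-retention requirements under policy X?"},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": GENERATION_MODEL_ARN,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {
                        "numberOfResults": 25,  # broad initial vector retrieval
                        "rerankingConfiguration": {
                            "type": "BEDROCK_RERANKING_MODEL",
                            "bedrockRerankingConfiguration": {
                                "modelConfiguration": {"modelArn": RERANK_MODEL_ARN},
                                "numberOfRerankedResults": 5,  # keep only the most relevant chunks
                            },
                        },
                    }
                },
            },
        },
    )

    # The response is an event stream; print generated text as it arrives
    # (event names per the current SDK documentation).
    for event in response["stream"]:
        if "output" in event:
            print(event["output"]["text"], end="")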
Why other options are incorrect:
Option A: Requires deploying and managing a SageMaker endpoint for the ranking model plus an API Gateway layer, which adds operational overhead for model hosting, scaling, monitoring, and cost management.
Option B: Stitches together multiple AWS services (Comprehend, Textract, Neptune) that are not purpose-built for reranking retrieval results, requiring complex integration, significant operational overhead, and added latency.
Option C: Although it uses only Amazon Bedrock APIs, this approach requires the application to implement its own retrieval pipeline with multiple API calls, adding glue code and extra points of failure compared with the integrated solution in Option D (a sketch of this pipeline appears after the explanation).
The key requirement is "with the LEAST operational overhead", making the native, managed reranking capability within Amazon Bedrock Knowledge Bases the optimal solution.
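
For contrast, a rough sketch of the Option C pipeline is shown below (Python, boto3; all identifiers are hypothetical and the request shapes should be verified against current SDK documentation). Functionally it achieves the same retrieve-then-rerank flow, but the application now owns three separate API calls, the prompt assembly, and the stream parsing, which is exactly the glue code that the integrated reranking configuration in Option D avoids.

    import json
    import boto3

    # Hypothetical identifiers for illustration only.
    KB_ID = "EXAMPLEKBID"
    RERANK_MODEL_ARN = "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
    GENERATION_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

    agent_rt = boto3.client("bedrock-agent-runtime", region_name="us-west-2")
    bedrock_rt = boto3.client("bedrock-runtime", region_name="us-west-2")

    query = "What are the data-retention requirements under policy X?"

    # Call 1: initial vector retrieval from the knowledge base (Retrieve API).
    retrieved = agent_rt.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": query},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 25}},
    )
    chunks = [r["content"]["text"] for r in retrieved["retrievalResults"]]

    # Call 2: rerank the retrieved chunks with the standalone Rerank API.
    reranked = agent_rt.rerank(
        queries=[{"type": "TEXT", "textQuery": {"text": query}}],
        sources=[
            {
                "type": "INLINE",
                "inlineDocumentSource": {"type": "TEXT", "textDocument": {"text": c}},
            }
            for c in chunks
        ],
        rerankingConfiguration={
            "type": "BEDROCK_RERANKING_MODEL",
            "bedrockRerankingConfiguration": {
                "modelConfiguration": {"modelArn": RERANK_MODEL_ARN},
                "numberOfResults": 5,
            },
        },
    )
    top_chunks = [chunks[r["index"]] for r in reranked["results"]]

    # Call 3: stream a grounded answer with InvokeModelWithResponseStream.
    prompt = (
        "Answer using only this context:\n"
        + "\n\n".join(top_chunks)
        + f"\n\nQuestion: {query}"
    )
    stream = bedrock_rt.invoke_model_with_response_stream(
        modelId=GENERATION_MODEL_ID,
        body=json.dumps(
            {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            }
        ),
    )
    for event in stream["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if chunk.get("type") == "content_block_delta":
            print(chunk["delta"].get("text", ""), end="")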
A company runs a Retrieval Augmented Generation (RAG) application that uses Amazon Bedrock Knowledge Bases to perform regulatory compliance queries. The application uses the RetrieveAndGenerateStream API. The application retrieves relevant documents from a knowledge base that contains more than 50,000 regulatory documents, legal precedents, and policy updates.
The RAG application is producing suboptimal responses because the initial retrieval often returns semantically similar but contextually irrelevant documents. The poor responses are causing model hallucinations and incorrect regulatory guidance. The company needs to improve the performance of the RAG application so it returns more relevant documents.
Which solution will meet this requirement with the LEAST operational overhead?
A
Deploy an Amazon SageMaker endpoint to run a fine-tuned ranking model. Use an Amazon API Gateway REST API to route requests. Configure the application to make requests through the REST API to rerank the results.
B
Use Amazon Comprehend to classify documents and apply relevance scores. Integrate the RAG application's reranking process with Amazon Textract to run document analysis. Use Amazon Neptune to perform graph-based relevance calculations.
C
Implement a retrieval pipeline that uses the Amazon Bedrock Knowledge Bases Retrieve API to perform initial document retrieval. Call the Amazon Bedrock Rerank API to rerank the results. Invoke the InvokeModelWithResponseStream operation to generate responses.
D
Use the latest Amazon reranker model through the reranking configuration within Amazon Bedrock Knowledge Bases. Use the model to improve document relevance scoring and to reorder results based on contextual assessments.