
## Answer

**C** - Use OpenSearch Serverless with properly sized capacity units and optimized index refresh intervals.
## Explanation

The correct answer is **C** because:

- **OpenSearch Serverless** is a supported vector store for Amazon Bedrock Knowledge Bases (and the default quick-create option) for storing and retrieving embeddings
- **Properly sized capacity units (OCUs)** ensure the collection has sufficient compute and memory to handle the query load efficiently
- **Optimized index refresh intervals** balance data freshness against performance - longer refresh intervals reduce indexing overhead and can improve query speed

Why the other options are incorrect:

- **A) Reduce model temperature** - Temperature affects the randomness and creativity of model outputs, not retrieval performance
- **B) Use a smaller embedding dimension size** - Smaller dimensions might marginally improve speed, but they degrade embedding quality and semantic recall
- **D) Add more S3 buckets** - S3 buckets store the source documents; retrieval latency is determined by the vector database performing the search, not by how the source data is bucketed

The most effective approach is to optimize the OpenSearch Serverless configuration, since it directly handles the vector similarity search operations during retrieval.
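As an illustration of the refresh-interval trade-off mentioned above: OpenSearch Serverless manages refresh behavior automatically, but on a provisioned OpenSearch domain the interval can be raised explicitly via the index settings API (sent as `PUT <index>/_settings`; `my-vector-index` is a hypothetical index name). A sketch of the settings body:

```json
{
  "index": {
    "refresh_interval": "30s"
  }
}
```

Raising the interval from the 1-second default means newly indexed embeddings become searchable slightly later, in exchange for less background segment churn during heavy query traffic.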
Author: Ritesh Yadav
A team notices slow query performance when retrieving embeddings from their Bedrock Knowledge Base. What is the most effective way to improve retrieval speed?
A. Reduce model temperature
B. Use a smaller embedding dimension size
C. Use OpenSearch Serverless with properly sized capacity units and index refresh intervals
D. Add more S3 buckets
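When diagnosing slow retrieval, it helps to measure query latency directly. Below is a hedged sketch using the real `bedrock-agent-runtime` `retrieve` API via boto3; the knowledge base ID and query text are placeholder assumptions, and `numberOfResults` is the knob that bounds how many chunks the vector search returns.

```python
import time


def build_retrieve_params(kb_id: str, query: str, k: int = 5) -> dict:
    """Build the request body for bedrock-agent-runtime's retrieve API.

    numberOfResults (k) caps how many chunks come back from the
    vector search; smaller k generally means a faster query.
    """
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {"numberOfResults": k}
        },
    }


def timed_retrieve(client, kb_id: str, query: str, k: int = 5):
    """Call retrieve() and report wall-clock latency in seconds.

    `client` is a boto3 bedrock-agent-runtime client.
    """
    start = time.perf_counter()
    response = client.retrieve(**build_retrieve_params(kb_id, query, k))
    elapsed = time.perf_counter() - start
    return response, elapsed


# Usage (requires AWS credentials and a real knowledge base ID;
# the ID below is a placeholder, not a real resource):
#
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# resp, secs = timed_retrieve(client, "KBEXAMPLEID", "What is our refund policy?")
# print(f"retrieved {len(resp['retrievalResults'])} chunks in {secs:.3f}s")
```

Timing a handful of representative queries before and after resizing capacity units gives concrete evidence of whether the OpenSearch configuration, rather than the application layer, is the bottleneck.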