
A financial services firm wants to store and query millions of embeddings from customer documents to power an AI assistant built on Amazon Bedrock. They need full control over database configuration. Which AWS service should they use to self-manage the embedding vector database?
A. Amazon RDS
B. Amazon DynamoDB
C. Amazon OpenSearch Service
D. Amazon Aurora
E. Amazon Neptune
F. Amazon MemoryDB for Redis
Explanation:
Amazon OpenSearch Service is the correct choice for this scenario because:
Vector Database Capabilities: OpenSearch supports k-NN (approximate nearest-neighbor) vector similarity search, which is essential for storing and querying embeddings efficiently (see the index sketch below)
Full Control: Provisioning your own OpenSearch Service domain lets you configure instance types, sharding, index settings, and performance tuning yourself, rather than relying on a fully managed serverless vector store
Scalability: A distributed cluster architecture can scale to millions of embeddings
Integration with Amazon Bedrock: Embeddings generated by Bedrock models can be indexed in OpenSearch and retrieved as context for the AI assistant
Financial Services Compliance: Supports encryption, fine-grained access control, and other security features needed for financial data
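To make the vector-index point concrete, here is a minimal sketch of creating a k-NN-enabled index on an OpenSearch Service domain with the opensearch-py client. The domain endpoint, credentials, index name, and the 1536-dimension setting (matching Amazon Titan Text Embeddings) are illustrative assumptions, not values taken from the question.

```python
# Sketch: create a k-NN vector index on an Amazon OpenSearch Service domain.
# Endpoint, credentials, index name, and dimension are placeholder assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),  # placeholder credentials
    use_ssl=True,
)

index_body = {
    "settings": {
        "index.knn": True,           # enable k-NN vector search on this index
        "number_of_shards": 4,       # cluster tuning stays under your control
        "number_of_replicas": 1,
    },
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 1536,   # must match the embedding model's output size
                "method": {
                    "name": "hnsw",  # approximate nearest-neighbor algorithm
                    "engine": "faiss",
                    "space_type": "l2",
                },
            },
            "text": {"type": "text"},  # original document chunk for retrieval
        }
    },
}

client.indices.create(index="customer-docs", body=index_body)
```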
Other options are less suitable:
Amazon RDS: Traditional relational database; vector search is only available through extensions such as pgvector on RDS for PostgreSQL and is not optimized for large-scale similarity search
Amazon DynamoDB: NoSQL key-value database with no native vector search capability
Amazon Aurora: Relational database; like RDS, it relies on the pgvector extension rather than purpose-built vector indexing
Amazon Neptune: Graph database, not optimized for vector similarity search
Amazon MemoryDB for Redis: In-memory database that can support vector search, but keeping millions of embeddings in memory is costly, and OpenSearch is more purpose-built for this use case
OpenSearch provides the best combination of vector search capabilities, scalability, and configuration control for embedding storage and retrieval.
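As a rough end-to-end illustration of the Bedrock-plus-OpenSearch pattern the question describes, the sketch below embeds a user query with a Bedrock embedding model and runs a k-NN search against the index created above. The model ID (amazon.titan-embed-text-v1), region, endpoint, and credentials are assumptions for the example, not requirements from the question.

```python
# Sketch: embed a query with Amazon Bedrock, then retrieve similar document
# chunks from the OpenSearch vector index. Model ID, region, endpoint, and
# credentials are placeholder assumptions.
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# 1. Embed the user's question with a Bedrock embedding model.
response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "What is the wire transfer limit?"}),
)
query_vector = json.loads(response["body"].read())["embedding"]

# 2. Run an approximate k-NN search against the vector index.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),  # placeholder credentials
    use_ssl=True,
)
results = client.search(
    index="customer-docs",
    body={
        "size": 5,
        "query": {"knn": {"embedding": {"vector": query_vector, "k": 5}}},
    },
)

# 3. The matched chunks would then be passed to a Bedrock LLM as context.
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```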