
Answer-first summary for fast verification
Answer: Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
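To illustrate why DAX covers the sub-millisecond read requirement, here is a minimal, hedged sketch of the read path. The table name, key schema, and cluster endpoint are illustrative placeholders, and the live SDK calls appear only as comments:

```python
# Sketch of the Option C read path. The amazondax client (pip install
# amazon-dax-client) is designed to be drop-in compatible with boto3's
# low-level DynamoDB client, so the same get_item request can be sent to a
# DAX cluster for microsecond cached reads. Names below are placeholders.

def build_get_item_request(table_name, player_id):
    """Build the get_item parameters shared by boto3 and the DAX client."""
    return {
        "TableName": table_name,
        "Key": {"PlayerId": {"S": player_id}},
        # DAX caches eventually consistent reads; strongly consistent reads
        # (ConsistentRead=True) are passed through to DynamoDB itself.
    }

request = build_get_item_request("GameSessions", "player-123")

# With boto3 (single-digit-millisecond reads):
#   import boto3
#   ddb = boto3.client("dynamodb")
#   item = ddb.get_item(**request)
#
# With DAX (microsecond cached reads; endpoint is a placeholder):
#   from amazondax import AmazonDaxClient
#   dax = AmazonDaxClient(endpoints=["my-dax-cluster.example:8111"])
#   item = dax.get_item(**request)

print(request["Key"]["PlayerId"]["S"])
```

Because the DAX client mirrors the DynamoDB API, the application code that builds requests does not change when the cache is added.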
## Explanation

**Option C is the correct answer** because it combines sub-millisecond latency for frequently accessed data with serverless querying of historical data, all with minimal operational overhead.

### Analysis of Requirements

1. **Sub-millisecond latency for data reads.** Critical for a multiplayer gaming application, where real-time responsiveness is essential.
2. **One-time queries on historical data.** The company needs to analyze past data for analytics, reporting, or debugging.
3. **Least operational overhead.** The solution should be serverless or fully managed wherever possible.

### Why Option C Is Best

**For sub-millisecond latency:**

- DynamoDB on its own delivers single-digit millisecond reads; adding **DynamoDB Accelerator (DAX)** brings read latency down to microseconds.
- DAX is a fully managed, in-memory cache for DynamoDB, so there are no cache servers to provision or patch.
- This is ideal for gaming applications where fast data access is critical.

**For historical queries:**

- **DynamoDB table export** is a managed feature that exports table data to Amazon S3 without consuming table read capacity.
- **Amazon Athena** runs SQL queries directly on the data in S3 with no infrastructure to manage.
- Together they provide serverless analytics on historical data.

**Minimal operational overhead:**

- All components are fully managed AWS services.
- There are no custom scripts to write and no infrastructure to maintain.
- DynamoDB table export and Athena are serverless.

### Why the Other Options Are Not Optimal

**Option A (Amazon RDS):**

- RDS provides millisecond, not sub-millisecond, read latency.
- A periodic custom export script adds operational overhead.
- RDS is not optimized for gaming workloads with high read/write throughput.

**Option B (direct S3 storage):**

- S3 does not provide sub-millisecond reads; first-byte latency is typically on the order of 100-200 ms.
- That makes it unsuitable as the primary store for real-time gaming data.
- It addresses only the historical-query requirement.

**Option D (DynamoDB with Kinesis):**

- DynamoDB handles the frequent-access data, but without DAX, reads are single-digit milliseconds rather than sub-millisecond.
- Kinesis Data Streams plus Kinesis Data Firehose adds streaming infrastructure to configure and monitor.
- It is more complex and more expensive than a simple DynamoDB table export, and it provides no query layer for the data once it lands in S3.

### Key AWS Services Used in Option C

1. **Amazon DynamoDB**: NoSQL database for gaming data with single-digit millisecond latency.
2. **DynamoDB Accelerator (DAX)**: in-memory cache for microsecond read latency.
3. **DynamoDB table export**: managed export of table data to S3.
4. **Amazon S3**: storage for the exported historical data.
5. **Amazon Athena**: serverless SQL query service for data in S3.

This architecture provides the optimal balance of real-time performance for the game and analytics capability for historical data, with minimal operational management.
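The export-and-query flow in Option C can be sketched with the AWS SDK for Python (boto3). All ARNs, bucket names, and database names below are placeholders; the functions only build the request parameters, and the boto3 calls that would consume them are shown as comments:

```python
# Hedged sketch of the historical-data path in Option C: export the table to
# S3 with DynamoDB table export, then query the exported data with Athena.

def build_export_request(table_arn, bucket, prefix):
    """Parameters for dynamodb.export_table_to_point_in_time.

    Note: table export requires point-in-time recovery (PITR) to be
    enabled on the source table.
    """
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "S3Prefix": prefix,
        "ExportFormat": "DYNAMODB_JSON",  # or "ION"
    }

def build_athena_query(database, output_location):
    """Parameters for athena.start_query_execution over the exported data.

    The table game_sessions_export and its columns are hypothetical; in
    practice a table is defined over the export location (e.g. via Glue).
    """
    return {
        "QueryString": (
            "SELECT Item.PlayerId.S AS player_id, COUNT(*) AS sessions "
            "FROM game_sessions_export "
            "GROUP BY Item.PlayerId.S"
        ),
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_location},
    }

export_req = build_export_request(
    "arn:aws:dynamodb:us-east-1:123456789012:table/GameSessions",
    "example-game-exports",
    "exports/2024/",
)
query_req = build_athena_query(
    "game_analytics", "s3://example-game-exports/athena-results/"
)

# With boto3 (not executed here):
#   boto3.client("dynamodb").export_table_to_point_in_time(**export_req)
#   boto3.client("athena").start_query_execution(**query_req)

print(export_req["ExportFormat"], query_req["QueryExecutionContext"]["Database"])
```

Both calls are one-shot API requests against fully managed services, which is exactly why this path carries less operational overhead than the streaming pipeline in Option D.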
Author: LeetQuiz Editorial Team
A company hosts a multiplayer gaming application on AWS. The company wants the application to read data with sub-millisecond latency and run one-time queries on historical data.
Which solution will meet these requirements with the LEAST operational overhead?
**A.** Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.

**B.** Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.

**C.** Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.

**D.** Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.