
Answer-first summary for fast verification
Answer: Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
## Explanation

**Correct Answer: B** - Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.

**Why this is correct:**

1. **Minimal coding requirement**: The DynamoDB export-to-S3 feature is a native, fully managed capability - you only configure the export settings; no ETL code is required.
2. **Does not affect availability**: The export runs asynchronously in the background without impacting the availability of the DynamoDB table.
3. **Does not affect RCUs**: The export reads from the table's continuous backups rather than from the live table, so it consumes no read capacity units (RCUs) from the table's provisioned throughput.
4. **Continuous backups**: Point-in-time recovery (PITR) is a prerequisite for the export feature and provides continuous backup protection with up to 35 days of retention, allowing restores to any point in time within that window. Combined with exports to S3, which create full copies of the data in the bucket, this provides comprehensive backup coverage:
   - **Export to S3**: Creates full backups of your data
   - **Point-in-time recovery**: Provides continuous backup protection for the table itself

**Why other options are incorrect:**

**A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.**
- Requires significant coding and infrastructure management
- EMR clusters are costly and complex to maintain
- Does not meet the "minimal amount of coding" requirement

**C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket.**
- Requires significant coding to implement the Lambda function
- Reading a DynamoDB stream does not consume the table's RCUs, but the solution is far more complex than necessary
- While it provides near-real-time change capture, it does not meet the "minimal coding" requirement

**D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.**
- Requires coding the Lambda function
- The function would have to scan or query the table, consuming RCUs and potentially affecting application performance
- More complex than the native export feature

**Key AWS Concepts:**
- **DynamoDB export to S3**: A serverless, managed export that reads from continuous backups and does not consume table capacity
- **Point-in-time recovery (PITR)**: Provides continuous backups with up to 35 days of retention
- **Read capacity units (RCUs)**: The provisioned throughput for read operations on a DynamoDB table

This solution is the simplest, most cost-effective approach that meets all requirements with minimal coding effort.
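The two steps of the correct answer - enabling PITR, then triggering a native export - can be sketched with boto3. This is a minimal illustration, not production code: the table name `GameData`, the table ARN, and the bucket `my-backup-bucket` are hypothetical placeholders, and the API calls in the `__main__` section require valid AWS credentials.

```python
def build_export_request(table_arn: str, bucket: str, prefix: str) -> dict:
    """Build the parameters for DynamoDB's native ExportTableToPointInTime API.

    The export reads from the table's continuous backups (PITR must already be
    enabled), so it consumes no RCUs and does not affect table availability.
    """
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "S3Prefix": prefix,
        "ExportFormat": "DYNAMODB_JSON",  # "ION" is the other supported format
    }


if __name__ == "__main__":
    import boto3  # AWS SDK for Python; requires credentials to run

    ddb = boto3.client("dynamodb")

    # Step 1: turn on point-in-time recovery (required before exporting).
    ddb.update_continuous_backups(
        TableName="GameData",  # hypothetical table name
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )

    # Step 2: kick off the export; it runs asynchronously against backup data.
    response = ddb.export_table_to_point_in_time(
        **build_export_request(
            table_arn="arn:aws:dynamodb:us-east-1:111122223333:table/GameData",
            bucket="my-backup-bucket",
            prefix="exports/",
        )
    )
    print(response["ExportDescription"]["ExportStatus"])  # e.g. IN_PROGRESS
```

Note that there is no custom data-handling code at all - the "minimal coding" here is just two API calls, which is exactly why option B beats the Lambda- and EMR-based alternatives.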
Author: LeetQuiz Editorial Team
A gaming company uses Amazon DynamoDB to store user information such as geographic location, player data, and leaderboards. The company needs to configure continuous backups to an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the application and must not affect the read capacity units (RCUs) that are defined for the table.
Which solution meets these requirements?
A
Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B
Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
C
Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket.
D
Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.