
A gaming company uses Amazon DynamoDB to store user information such as geographic location, player data, and leaderboards. The company needs to configure continuous backups to an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the application and must not affect the read capacity units (RCUs) that are defined for the table.
Which solution meets these requirements?
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.
Explanation:
Correct Answer: B - Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
Why this is correct:
Minimal coding requirement: The DynamoDB Export to S3 feature is a native, fully managed capability. It requires minimal coding; you only configure the export settings.
Does not affect availability: The export operation runs asynchronously in the background without impacting the availability of your DynamoDB table.
Does not affect RCUs: The export does not consume read capacity units (RCUs) from the table's provisioned capacity. It reads from the table's continuous backup (PITR) data rather than going through the table's provisioned throughput.
Continuous backups: Point-in-time recovery (PITR) provides continuous backups of the table, with per-second restore granularity for the preceding 35 days. The Export to S3 feature reads from this PITR data, so the combination directly satisfies the continuous-backup requirement.
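As an illustration of how little code this approach takes, the two steps above can be sketched with boto3. This is a minimal sketch, not a production script; the table name, account ID, region, and bucket name below are placeholder assumptions.

```python
def build_export_request(table_arn: str, bucket: str) -> dict:
    """Build the kwargs for DynamoDB's ExportTableToPointInTime API call."""
    return {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "ExportFormat": "DYNAMODB_JSON",  # "ION" is the other supported format
    }


def main() -> None:
    # Requires valid AWS credentials when actually run.
    import boto3

    dynamodb = boto3.client("dynamodb")

    # Step 1: turn on point-in-time recovery (continuous backups).
    dynamodb.update_continuous_backups(
        TableName="GameUsers",  # placeholder table name
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )

    # Step 2: export the table to S3. This reads PITR backup data,
    # so it consumes no RCUs and does not affect table availability.
    response = dynamodb.export_table_to_point_in_time(
        **build_export_request(
            "arn:aws:dynamodb:us-east-1:123456789012:table/GameUsers",
            "game-users-backups",  # placeholder bucket
        )
    )
    # The export runs asynchronously; poll ExportStatus until COMPLETED.
    print(response["ExportDescription"]["ExportStatus"])


if __name__ == "__main__":
    main()
```

Everything outside the two API calls is plumbing, which is the point of the "minimal coding" requirement.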
Why other options are incorrect:
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3. Standing up an EMR cluster and writing a Hive job involves significant setup and coding, and the Hive job reads the table through its provisioned throughput, consuming RCUs and potentially affecting the application.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket. This requires writing and maintaining custom Lambda code, and a stream captures only changes made after it is enabled, so it does not back up the table's existing data.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table. A scheduled Lambda function must scan the table, which consumes RCUs and can affect application performance, requires custom code, and produces periodic rather than continuous backups.
Key AWS Concepts:
Point-in-time recovery (PITR): continuous backups of a DynamoDB table with per-second restore granularity for up to 35 days.
DynamoDB Export to S3: a fully managed export that reads from PITR backup data and consumes no RCUs.
Read capacity units (RCUs): a table's provisioned read throughput; operations that consume RCUs compete with the application's reads.
This solution provides the simplest, most cost-effective approach that meets all requirements with minimal coding effort.