
Answer-first summary for fast verification
Answer: Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.
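With DynamoDB TTL, each item carries a numeric attribute holding an epoch-seconds timestamp; DynamoDB deletes the item some time after that timestamp passes. A minimal sketch of writing such an item for a 120-day retention (the attribute name expires_at and the key schema are assumptions, not part of the question):

```python
import time

RETENTION_DAYS = 120

# Epoch-seconds timestamp 120 days from now; DynamoDB's TTL feature
# deletes items once the attribute value is in the past.
expires_at = int(time.time()) + RETENTION_DAYS * 24 * 60 * 60

# Item in DynamoDB's low-level attribute-value format; the key name
# "device_id" and the TTL attribute name "expires_at" are hypothetical.
item = {
    "device_id": {"S": "sensor-001"},
    "payload": {"B": b"..."},              # the <4 KB record body
    "expires_at": {"N": str(expires_at)},  # TTL attribute must be numeric
}
```

TTL must also be enabled once on the table itself, pointing at the chosen attribute name; deletion then happens in the background at no extra write cost, which is what makes it cheaper than query-and-delete approaches.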
B. "MOST cost-effective" is the key phrase.
A - No value in storing each record as a small .CSV file in S3, given the object-size overhead and per-request costs.
B - Might make sense with reserved capacity, but DynamoDB needs 1 write capacity unit per 1 KB written, so it becomes extremely expensive at high throughput (see https://segment.com/blog/the-million-dollar-eng-problem/). If not enough capacity is provisioned (capacity is charged even if unused), it gets even more expensive. Storage: 15,360 GB x 0.25 USD = 3,840.00 USD monthly. Millions of writes per minute = 16,667 per second = 66,668 WCUs = 6,232.45 USD monthly with reserved capacity (still not sure this is correct, as it seems cheap).
C - RDS starts to make sense; it can go up to 80,000 IOPS. 15,360 GB x 0.125 USD x 1 instance = 1,920.00 USD (EBS storage cost); 16,667 provisioned IOPS x 0.10 USD x 1 instance = 1,666.70 USD (EBS IOPS cost); 1,920.00 USD + 1,666.70 USD = 3,586.70 USD monthly.
D - The S3 metadata search feature does not exist.
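The WCU and storage figures in the comment above can be reproduced with a few lines of arithmetic (the USD rates are the assumed per-GB and per-IOPS prices quoted in the comment, not current AWS pricing):

```python
import math

# Assumed workload from the question: ~1 million writes/minute, <4 KB each.
writes_per_minute = 1_000_000
record_kb = 4
wcu_per_write = math.ceil(record_kb / 1)               # 1 WCU per 1 KB write -> 4
writes_per_second = math.ceil(writes_per_minute / 60)  # 16,667
wcus_needed = writes_per_second * wcu_per_write        # 66,668

storage_gb = 15 * 1024                                 # ~15 TB in GB = 15,360

# Option B: DynamoDB storage at an assumed 0.25 USD/GB-month.
dynamodb_storage_usd = storage_gb * 0.25               # 3,840.00 USD/month

# Option C: EBS-backed RDS at assumed 0.125 USD/GB and 0.10 USD/IOPS.
ebs_storage_usd = storage_gb * 0.125                   # 1,920.00 USD/month
ebs_iops_usd = writes_per_second * 0.10                # 1,666.70 USD/month
rds_total_usd = ebs_storage_usd + ebs_iops_usd         # 3,586.70 USD/month
```

This confirms the comment's arithmetic is internally consistent; whether B or C wins in practice depends on actual negotiated pricing and reserved-capacity terms.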
Author: LeetQuiz Editorial Team
A Solutions Architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral, and the company is required to store it for 120 days only, after which it can be deleted. The Solutions Architect calculates that, over the course of a year, the storage requirements would be about 10-15 TB. Which storage strategy is the MOST cost-effective and meets the design requirements?
A
Design the application to store each incoming record as a single .CSV file in an Amazon S3 bucket to allow for indexed retrieval. Configure a lifecycle policy to delete data older than 120 days.
B
Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.
C
Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 120 days.
D
Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to contain the list of records in the batch and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to delete the data after 120 days.
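For the S3-based options (A and D), the 120-day deletion would be handled by an S3 lifecycle expiration rule rather than application code. A sketch of such a rule as the configuration dict boto3's put_bucket_lifecycle_configuration expects (the rule ID is hypothetical):

```python
# Lifecycle rule that expires every object 120 days after creation.
# This dict matches the shape accepted by
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...);
# the rule ID below is illustrative.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-after-120-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},        # empty prefix = whole bucket
            "Expiration": {"Days": 120},
        }
    ]
}
```

The rule itself is sound in either option; what rules those answers out is the per-object overhead of millions of tiny files (A) and the nonexistent metadata search feature (D), not the deletion mechanism.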