
A company has an application that generates a weather forecast every 15 minutes, with a resolution of 1 billion unique locations, each approximately 20 bytes in size (20 GB per forecast). Every hour, the forecast data is accessed globally approximately 5 million times (1,400 requests per second), and up to 10 times more often during severe weather events. Each new forecast completely overwrites the previous data. Users of the current weather forecast application expect responses to queries in under two seconds. Which design meets the required request rate and response time?
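
The figures in the prompt are internally consistent; as a quick back-of-the-envelope check (a sketch added for clarity, not part of the original question, using only the numbers stated above):

# Sanity check of the stated figures
locations = 1_000_000_000            # distinct forecast locations
bytes_per_location = 20              # approximate record size
forecast_size_gb = locations * bytes_per_location / 1e9
print(forecast_size_gb)              # -> 20.0 GB per forecast

requests_per_hour = 5_000_000
requests_per_second = requests_per_hour / 3600
print(round(requests_per_second))        # -> 1389, i.e. roughly 1,400 req/s
print(round(requests_per_second) * 10)   # -> ~13,890 req/s during severe weather
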
A
Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution with an Amazon API Gateway endpoint as the origin, backed by AWS Lambda functions that respond to the queries. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
B
Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution whose origin is an Elastic Load Balancer in front of an Auto Scaling group of Amazon EC2 instances that mount the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
C
Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution with an Amazon API Gateway endpoint as the origin, backed by AWS Lambda functions that respond to the queries. Create a Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
D
Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution whose origin is an Elastic Load Balancer in front of an Auto Scaling group of Amazon EC2 instances, with the EC2 instances querying the S3 objects. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
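
Several of the options hinge on caching the forecast in CloudFront for the 15-minute update cycle. As a rough illustration of that idea (a minimal sketch, not an official answer, assuming the S3-backed layout from option D; the bucket and key names are hypothetical), each forecast object could be written with a Cache-Control header of 900 seconds so downstream caches expire in step with each update:

# Minimal sketch: write each forecast location as its own S3 object with a
# Cache-Control header, so a CloudFront distribution in front of it can serve
# cached copies for 15 minutes (900 s), matching the forecast update cycle.
# Bucket and key names below are hypothetical and not part of the question.
import boto3

s3 = boto3.client("s3")

def put_forecast_point(location_id: str, payload: bytes) -> None:
    s3.put_object(
        Bucket="example-forecast-bucket",   # hypothetical bucket
        Key=f"forecast/{location_id}",      # one ~20-byte object per location
        Body=payload,
        CacheControl="max-age=900",         # 15-minute TTL
    )

Because each new forecast simply overwrites the same keys, a fixed 900-second max-age keeps cached copies no staler than one forecast cycle.
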