
Answer-first summary for fast verification
Answer: Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
1. Explanation for Answer A:
- Amazon Elasticsearch Service (Amazon ES) is a search and analytics engine that can index large volumes of data and return query results quickly, making it suitable for fast lookups against the forecast positions.
- Amazon CloudFront is a content delivery network (CDN) that caches responses at edge locations closer to users, reducing latency and absorbing most of the global read traffic.
- Amazon API Gateway is a managed service that handles large numbers of concurrent API calls efficiently.
- AWS Lambda functions respond to queries without servers to manage, scaling automatically with request volume.
- Enabling API caching on the API Gateway stage with a 15-minute cache timeout matches the forecast update interval: cached responses stay valid until the next forecast overwrite, while the backend services are shielded from repeated identical queries.
- Together these services meet the required request rate (about 1,400 requests per second, up to 10 times more during weather events) and the sub-two-second response time: Amazon ES provides fast lookups, CloudFront and API Gateway caching serve the bulk of requests from cache, and Lambda handles the remaining compute.
2. Explanation for other options:
- Answer B: Amazon EFS is a network file system, not a low-latency query store. Serving 1 billion small records to a large fleet of EC2 instances over a mounted EFS volume is unlikely to sustain the required request rate or the two-second response target.
- Answer C: This option also stores the data in Amazon ES, but it replaces API Gateway's built-in stage caching with a custom Lambda@Edge function that "caches the data locally" at edge locations. Lambda@Edge provides no durable local cache to manage; CloudFront already caches at the edge, so the custom function adds complexity without the benefit of managed, TTL-based caching.
- Answer D: Amazon S3 is a highly durable and scalable object storage service, but routing every query through an Elastic Load Balancer to EC2 instances that fetch individual S3 objects adds per-request latency and may not meet the sub-two-second requirement under peak load.
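As a back-of-the-envelope check (not part of the exam answer), the workload figures quoted in the question can be verified with a few lines of arithmetic:

```python
# Sanity check of the workload figures quoted in the question.
positions = 1_000_000_000        # unique forecast positions per update
bytes_per_position = 20          # approximate size of each position record

forecast_size_gb = positions * bytes_per_position / 1e9
print(f"Forecast size per update: {forecast_size_gb:.0f} GB")

requests_per_hour = 5_000_000
baseline_rps = requests_per_hour / 3600
print(f"Baseline request rate: {baseline_rps:.0f} req/s")

surge_rps = baseline_rps * 10    # up to 10x during weather events
print(f"Peak request rate: {surge_rps:.0f} req/s")
```

This confirms the 20 GB forecast size and shows that 5 million requests per hour is roughly 1,389 requests per second (quoted as 1,400), rising to roughly 13,900 requests per second at peak.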
Author: LeetQuiz Editorial Team
A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request. Which design meets the required request rate and response time?
A
Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
B
Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
C
Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create an Amazon Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
D
Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
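For concreteness, the caching step in the correct answer (enabling stage-level API caching with a 15-minute TTL) could be applied with patch operations like the following sketch. The API ID and stage name are placeholders; the patch paths follow API Gateway's documented `update_stage` format.

```python
# Sketch: enable API Gateway stage caching with a TTL matching the
# 15-minute forecast update interval. API ID and stage are placeholders.
TTL_SECONDS = 15 * 60  # forecast is overwritten every 15 minutes

patch_operations = [
    # Turn on the stage's cache cluster.
    {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
    # Apply a 900-second TTL to all methods on the stage.
    {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": str(TTL_SECONDS)},
]

# With AWS credentials configured, this would be applied via:
#   import boto3
#   apigw = boto3.client("apigateway")
#   apigw.update_stage(restApiId="abc123", stageName="prod",
#                      patchOperations=patch_operations)
print(patch_operations)
```

Keeping the TTL equal to the forecast refresh interval means cached responses can never be staler than one update cycle.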