
Answer-first summary for fast verification
Answer: (E) Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream, and configure Amazon Kinesis Data Firehose to use the data stream as a source to deliver the data to Amazon S3; (A) use AWS Glue to process the raw data in Amazon S3.
## Explanation

**Correct Answers: E and A**

**E: Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data stream as a source to deliver the data to Amazon S3.**

- This solution provides a highly scalable, serverless architecture for data ingestion.
- API Gateway can handle millions of concurrent RESTful API calls from remote devices.
- Kinesis Data Streams can ingest and buffer massive amounts of streaming data.
- Kinesis Data Firehose automatically delivers the data to S3 with minimal operational overhead.
- This eliminates the need to manage EC2 instances for data ingestion and transformation.

**A: Use AWS Glue to process the raw data in Amazon S3.**

- AWS Glue is a serverless ETL service that can process data stored in S3.
- It can transform the raw data after Kinesis Data Firehose delivers it to S3.
- AWS Glue is serverless and scales automatically, which minimizes operational overhead.
- Processing can run on a schedule or be triggered when new data arrives in S3.

**Why the other options are incorrect:**

**B: Use Amazon Route 53 to route traffic to different EC2 instances.**

- Route 53 is a DNS service, not a solution for scaling data processing.
- While it can help distribute traffic, it does not address the core requirement of minimizing operational overhead.

**C: Add more EC2 instances to accommodate the increasing amount of incoming data.**

- This approach increases operational overhead (managing, scaling, and patching EC2 instances).
- It is not the "highly scalable solution that minimizes operational overhead" the company requires.

**D: Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.**

- SQS helps with decoupling, but this option still requires managing EC2 instances for processing.
- It does not minimize operational overhead compared with serverless alternatives.

**Key Architecture Benefits:**

1. **Serverless scalability**: API Gateway, Kinesis, and Firehose automatically scale to handle millions of devices.
2. **Minimal operational overhead**: No servers to manage, patch, or scale manually.
3. **Reliable data delivery**: Kinesis Data Firehose ensures delivery to S3 with built-in error handling.
4. **Flexible processing**: AWS Glue provides serverless ETL capabilities on the data in S3.
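One operational detail worth knowing about the ingestion path above: a producer writing directly to a Kinesis data stream with the `PutRecords` API is limited to 500 records and 5 MB of payload per call, so records must be batched client-side. The helper below is a minimal, hypothetical sketch (the record shape mirrors what boto3's `kinesis.put_records` expects, but the batching logic and names are illustrative, not part of the question):

```python
# Hypothetical client-side batching for the Kinesis PutRecords API.
# Service limits: at most 500 records and 5 MB of payload per call.
MAX_RECORDS_PER_CALL = 500
MAX_BYTES_PER_CALL = 5 * 1024 * 1024


def batch_records(records):
    """Yield lists of records that respect PutRecords per-call limits.

    Each record is a dict with 'Data' (bytes) and 'PartitionKey' (str),
    the same shape boto3's kinesis.put_records takes in its Records list.
    """
    batch, batch_bytes = [], 0
    for record in records:
        # Payload size counts both the data blob and the partition key.
        size = len(record["Data"]) + len(record["PartitionKey"].encode())
        if batch and (len(batch) == MAX_RECORDS_PER_CALL
                      or batch_bytes + size > MAX_BYTES_PER_CALL):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(record)
        batch_bytes += size
    if batch:
        yield batch
```

Each yielded batch could then be passed to `kinesis.put_records(StreamName=..., Records=batch)`; in the architecture from option E, API Gateway's Kinesis service integration handles this on the device-facing side, so device firmware never needs AWS credentials.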
Author: LeetQuiz Editorial Team
A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance. The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Use AWS Glue to process the raw data in Amazon S3.
B. Use Amazon Route 53 to route traffic to different EC2 instances.
C. Add more EC2 instances to accommodate the increasing amount of incoming data.
D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.
E. Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data stream as a source to deliver the data to Amazon S3.