
AWS Certified DevOps Engineer - Professional
A space exploration company collects telemetry data from various satellites. This data, in the form of small packets, is received via Amazon API Gateway and immediately queued in an Amazon Simple Queue Service (SQS) standard queue. A specialized application monitors this queue and converts the incoming data into a standardized format. However, because of inherent inconsistencies in the satellite data, the application sometimes fails to process certain messages, causing them to remain in the SQS queue. The DevOps engineer is tasked with creating a solution that not only preserves these unprocessed messages but also keeps them accessible for review and potential reprocessing by the scientific team. Which approach should the DevOps engineer take to address this issue?
Explanation:
The correct answer is C. This is a classic use case for a dead-letter queue (DLQ) in Amazon SQS. By creating an SQS dead-letter queue and attaching a redrive policy to the existing queue, you ensure that any message the application fails to transform is moved to the DLQ after a single failed receive attempt (a maxReceiveCount of 1 in the redrive policy). The scientists can then review the problematic data in the separate queue and reprocess it as needed. This solution is robust and aligns with AWS best practices for handling message processing failures.
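
For illustration, here is a minimal boto3 sketch of this configuration. The queue names (telemetry-queue, telemetry-dlq) are hypothetical, and the snippet assumes AWS credentials with permission to create and modify SQS queues:

import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue that will hold unprocessable telemetry messages.
# Queue names here are illustrative, not from the question.
dlq_url = sqs.create_queue(QueueName="telemetry-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Look up the existing source queue and attach a redrive policy: once a
# message has been received once without being deleted, SQS moves it to
# the dead-letter queue.
source_url = sqs.get_queue_url(QueueName="telemetry-queue")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=source_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "1"}
        )
    },
)

Once messages accumulate in the DLQ, the scientific team can inspect them there, and after the underlying processing issue is fixed, SQS's dead-letter queue redrive feature can move them back to the source queue for reprocessing.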