
## Answer

Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
## Explanation

**Correct answer: B**

**Why option B is correct:**

1. **SQS queue for job distribution**: Using Amazon SQS as the destination for jobs decouples the primary server from the compute nodes, improving resiliency. If the primary server fails, jobs remain in the queue and can be processed once it recovers.
2. **Auto Scaling based on queue size**: This is the most effective way to handle variable workloads. When the queue grows (more jobs waiting), Auto Scaling adds EC2 instances; when it shrinks, instances are removed, optimizing both cost and scalability.
3. **Resiliency**: The architecture has no single point of failure for in-flight work. Even if the primary server fails, jobs are preserved durably in SQS.
4. **Scalability**: Scaling on queue metrics lets the system adjust capacity automatically to match actual workload demand.

**Why option A is incorrect:**

- Scheduled scaling is not responsive to actual workload variations. It scales on a predetermined timetable, not on real-time job demand, so a variable workload will be over- or under-provisioned.

**Why option C is incorrect:**

- AWS CloudTrail records API calls for auditing; it is not a message queue and cannot serve as a job destination.
- Scaling on the primary server's load does not reflect the compute nodes' capacity needs.

**Why option D is incorrect:**

- Amazon EventBridge routes events; it is not designed as a durable job queue for compute-intensive workloads.
- Scaling on compute node load reacts only after instances are already saturated, rather than proactively on the size of the job backlog.

**Key AWS services used:**

- **Amazon SQS**: reliable, scalable message queuing
- **EC2 Auto Scaling**: automatically adjusts capacity based on metrics
- **Amazon CloudWatch**: monitors queue depth (for example, the SQS `ApproximateNumberOfMessagesVisible` metric) to drive scaling decisions

This architecture maximizes resiliency (no single point of failure) and scalability (automatic scaling based on actual workload).
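To make the queue-size scaling concrete, here is a minimal sketch of the "backlog per instance" calculation that AWS recommends for scaling a worker fleet against an SQS queue: size the group so the current backlog can be drained within a target latency. The function name, parameters, and the throughput figures in the example are illustrative assumptions, not values from the question.

```python
import math

def desired_capacity(queue_depth: int,
                     msgs_per_instance_per_sec: float,
                     target_latency_sec: float,
                     min_size: int = 1,
                     max_size: int = 20) -> int:
    """Instances needed to drain `queue_depth` messages within the
    latency target, clamped to the Auto Scaling group's bounds.
    (Illustrative sketch; names and defaults are assumptions.)"""
    # Backlog one instance can absorb within the latency target.
    acceptable_backlog_per_instance = msgs_per_instance_per_sec * target_latency_sec
    # Instances needed to clear the whole queue in time, rounded up.
    needed = math.ceil(queue_depth / acceptable_backlog_per_instance)
    # Respect the group's configured minimum and maximum size.
    return max(min_size, min(max_size, needed))

# Example: 1,000 queued jobs, each instance handles 10 jobs/sec,
# and jobs should wait at most 10 seconds.
print(desired_capacity(1000, 10, 10))   # -> 10
print(desired_capacity(0, 10, 10))      # -> 1 (clamped to min_size)
```

In practice this ratio is published as a custom CloudWatch metric (queue depth divided by running instances) and used as the target of a target-tracking scaling policy, so EC2 Auto Scaling performs the equivalent computation continuously.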
Author: LeetQuiz Editorial Team
## Question

A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability. How should a solutions architect design the architecture to meet these requirements?
**A.** Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.

**B.** Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.

**C.** Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.

**D.** Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.