
**Answer:** D. Use AWS Batch on Amazon EC2.
## Explanation

**Correct Answer: D. Use AWS Batch on Amazon EC2**

AWS Batch is purpose-built for running batch computing workloads on AWS. Here's why it is the best choice:

### Key Factors

1. **CPU-intensive workload**: The application requires 64 vCPU and 512 GiB of memory, a substantial compute requirement.
2. **Batch job pattern**: Running every hour for about 15 minutes is a classic batch processing pattern.
3. **Least operational overhead**: AWS Batch manages the provisioning, scheduling, and scaling of compute resources.

### Why AWS Batch is optimal

- **Automatic resource provisioning**: AWS Batch provisions EC2 instances that meet the job's requirements (64 vCPU, 512 GiB of memory)
- **Job scheduling**: handles job queuing and scheduling automatically
- **Cost optimization**: runs instances only while jobs execute, then terminates them
- **Minimal management**: no clusters, scaling policies, or instance lifecycle to manage
- **Integration**: works seamlessly with other AWS services, such as Amazon EventBridge for the hourly trigger

### Why the other options fall short

**A. AWS Lambda**:
- Lambda caps out at 10 GB of memory and roughly 6 vCPU (CPU scales with configured memory), with a 15-minute timeout
- Cannot run a single 64-vCPU workload
- Would require complex parallelization and coordination

**B. Amazon ECS with AWS Fargate**:
- Fargate tasks have resource limits (currently up to 16 vCPU and 120 GB of memory per task)
- Reaching 64 vCPU would require splitting the job across multiple tasks
- That adds operational overhead for scheduling and coordination

**C. Amazon Lightsail with AWS Auto Scaling**:
- Lightsail is designed for simpler workloads, not large batch compute
- Auto Scaling adds operational complexity
- Not optimized for batch processing

**D. AWS Batch on Amazon EC2**:
- ✓ Specifically designed for batch workloads
- ✓ Can provision very large EC2 instances (up to 448 vCPU and 24 TiB of memory on high-memory instance types)
- ✓ Minimal operational overhead: AWS manages scheduling and provisioning
- ✓ Cost-effective: you pay for compute only while jobs run

### Additional Considerations

- AWS Batch can use Spot Instances for additional cost savings
- Job dependencies and retries are built in
- Monitoring and logging integrate with Amazon CloudWatch
- Supports Docker containers for application packaging

This solution provides the required compute power while minimizing operational overhead, making it the most appropriate choice for this CPU-intensive batch workload.
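To make the sizing concrete, here is a minimal sketch of an AWS Batch job definition matched to the on-premises server (64 vCPU, 512 GiB = 524,288 MiB). The job name, container image, and command are hypothetical placeholders, not part of the question; the actual boto3 call is left commented so the sketch runs offline.

```python
# Sketch: an AWS Batch job definition payload sized to match the
# on-premises server (64 vCPU, 512 GiB of memory).
# "hourly-batch-job", the image URI, and the command are illustrative only.

job_definition = {
    "jobDefinitionName": "hourly-batch-job",  # hypothetical name
    "type": "container",
    "containerProperties": {
        # Placeholder ECR image for the containerized batch application
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/batch-app:latest",
        "resourceRequirements": [
            {"type": "VCPU", "value": "64"},               # match on-prem vCPU
            {"type": "MEMORY", "value": str(512 * 1024)},  # 512 GiB in MiB
        ],
        "command": ["python", "run_batch.py"],  # illustrative entrypoint
    },
    "retryStrategy": {"attempts": 2},  # Batch retries are built in
}

# With boto3 installed and AWS credentials configured, the payload could be
# registered like this (commented out to keep the sketch self-contained):
# import boto3
# batch = boto3.client("batch")
# batch.register_job_definition(**job_definition)

print(job_definition["containerProperties"]["resourceRequirements"])
```

The hourly trigger itself would typically be an Amazon EventBridge schedule rule (e.g. `rate(1 hour)`) targeting the Batch job queue, so no server or cron host needs to be managed.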
Author: LeetQuiz Editorial Team
A company is migrating an old application to AWS. The application runs a batch job every hour and is CPU intensive. The batch job takes 15 minutes on average with an on-premises server. The server has 64 virtual CPU (vCPU) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
A. Use AWS Lambda with functional scaling.
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
C. Use Amazon Lightsail with AWS Auto Scaling.
D. Use AWS Batch on Amazon EC2.