
Answer-first summary for fast verification
Answer: Option D — Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate, enable auto scaling behind an Application Load Balancer, create additional read replicas for the database instance, establish an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster for the application services, and use Amazon S3 with an Amazon CloudFront distribution for static content storage and delivery.
Option D is the correct answer. Deploying the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate provides a serverless container experience: Fargate provisions and scales the underlying compute automatically, eliminating the need to manage EC2 instances directly and so minimizing operational overhead. Placing the workload behind an Application Load Balancer with auto scaling lets the infrastructure absorb the surge in orders during the product release. Creating additional read replicas for the Amazon RDS for PostgreSQL instance distributes read traffic and improves database performance under load. Because the application services already use Apache Kafka, migrating the self-managed cluster to Amazon MSK preserves the existing messaging code while offloading cluster operations to AWS; switching to Amazon Kinesis Data Streams (as in options A and B) would instead require rewriting the producers and consumers. Finally, storing static content in Amazon S3 behind an Amazon CloudFront distribution delivers that content with low latency and high transfer speeds from edge locations.
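To make the read-replica and MSK pieces of this architecture concrete, here is a minimal sketch using boto3. All identifiers, the Kafka version, and the broker instance type are hypothetical placeholders, not values from the question; the helpers only build the request parameters, and the actual AWS calls (which require credentials) are kept behind the main guard.

```python
def build_read_replica_params(source_id: str, replica_id: str) -> dict:
    """Parameters for rds.create_db_instance_read_replica.

    The replica inherits the engine (PostgreSQL here) from the source
    instance, so only the two identifiers are strictly required.
    """
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
    }


def build_msk_params(cluster_name: str, subnet_ids: list, sg_ids: list) -> dict:
    """Parameters for kafka.create_cluster (Amazon MSK)."""
    return {
        "ClusterName": cluster_name,
        "KafkaVersion": "3.6.0",          # hypothetical version
        "NumberOfBrokerNodes": len(subnet_ids),  # one broker per subnet/AZ
        "BrokerNodeGroupInfo": {
            "InstanceType": "kafka.m5.large",    # hypothetical size
            "ClientSubnets": subnet_ids,
            "SecurityGroups": sg_ids,
        },
    }


if __name__ == "__main__":
    import boto3  # requires AWS credentials; shown for illustration only

    rds = boto3.client("rds")
    kafka = boto3.client("kafka")
    rds.create_db_instance_read_replica(
        **build_read_replica_params("orders-db", "orders-db-replica-1")
    )
    kafka.create_cluster(
        **build_msk_params(
            "orders-events",
            ["subnet-a", "subnet-b", "subnet-c"],
            ["sg-0123"],
        )
    )
```

Spreading one broker per subnet across three Availability Zones mirrors the highly available MSK deployment the explanation describes, and additional read replicas can be created by calling the same RDS API with new replica identifiers.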
Author: LeetQuiz Editorial Team
A company has containerized its application services and deployed them on multiple Amazon EC2 instances with public IPs. It has also deployed an Apache Kafka cluster on these EC2 instances and migrated its PostgreSQL database to Amazon RDS for PostgreSQL. Anticipating a significant surge in orders upon the release of a new product version, the company seeks architectural changes that minimize operational overhead and accommodate the increase. What modifications should be made to the current setup to achieve these goals?
A
Implement an EC2 Auto Scaling group behind an Application Load Balancer, add more read replicas for the database instance, integrate Amazon Kinesis data streams for the application services, and use Amazon S3 for direct storage and serving of static content.
B
Establish an EC2 Auto Scaling group behind an Application Load Balancer, configure the database instance for Multi-AZ deployment with storage auto scaling, utilize Amazon Kinesis data streams for the application services, and employ Amazon S3 for direct storage and serving of static content.
C
Set up a Kubernetes cluster on the EC2 instances behind an Application Load Balancer, deploy the database instance in Multi-AZ mode with storage auto scaling, create an Amazon Managed Streaming for Apache Kafka cluster for the application services, and store static content in Amazon S3 with an Amazon CloudFront distribution.
D
Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate, enable auto scaling behind an Application Load Balancer, create additional read replicas for the database instance, establish an Amazon Managed Streaming for Apache Kafka cluster for the application services, and use Amazon S3 with an Amazon CloudFront distribution for static content storage and delivery.