
Answer-first summary for fast verification
Answer: Use the Kubernetes Metrics Server to activate horizontal pod autoscaling, and use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
## Explanation

To meet the requirement of scaling Amazon EKS according to workload with the **least operational overhead**, the correct combination is:

**B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.**
**C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.**

### Why this combination works

1. **Horizontal Pod Autoscaling (HPA) with the Kubernetes Metrics Server:**
   - HPA automatically scales the number of pods in a deployment based on observed CPU utilization or custom metrics.
   - The Metrics Server collects resource metrics from kubelets and exposes them through the Kubernetes Metrics API.
   - This handles scaling at the application (pod) level based on actual workload.

2. **Kubernetes Cluster Autoscaler:**
   - Automatically adjusts the size of the EKS cluster by adding or removing nodes.
   - Works in conjunction with HPA: when pods cannot be scheduled because of insufficient resources, the Cluster Autoscaler adds nodes.
   - When nodes are underutilized, it removes them to save costs.
   - This handles scaling at the infrastructure (node) level.

### Why the other options are incorrect

**A. Use an AWS Lambda function to resize the EKS cluster:**
- Requires custom code, monitoring, and ongoing maintenance.
- Higher operational overhead than the native Kubernetes solutions.
- Not the "least operational overhead" approach.

**D. Use Amazon API Gateway and connect it to Amazon EKS:**
- API Gateway is for creating, publishing, and managing APIs.
- It is not related to autoscaling EKS clusters and would not help with scaling based on workload.

**E. Use AWS App Mesh to observe network activity:**
- App Mesh is a service mesh for monitoring and controlling microservices.
- It provides observability for network traffic but does not perform autoscaling.

### Key benefits of the correct solution

- **Native Kubernetes integration:** uses standard Kubernetes components.
- **Automated scaling:** both the pod and node levels scale without manual intervention.
- **Cost optimization:** scales down during low-usage periods.
- **Minimal operational overhead:** managed by Kubernetes and AWS.
- **Seamless integration:** HPA and the Cluster Autoscaler work together automatically.

This combination provides a complete autoscaling solution: HPA scales pods based on workload, and the Cluster Autoscaler adjusts the node count to accommodate those pods, all with minimal operational effort.
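As a rough sketch, the pod-level half of this setup can be enabled with two `kubectl` commands. The deployment name `my-app` and the scaling thresholds below are illustrative placeholders, and these commands assume a cluster where you have admin access; the Metrics Server manifest URL is the one published in the kubernetes-sigs/metrics-server releases.

```shell
# Install the Metrics Server so HPA can read CPU/memory metrics
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Create an HPA for an example deployment "my-app":
# target 70% average CPU, scaling between 2 and 10 replicas
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10

# Verify the autoscaler is tracking metrics
kubectl get hpa my-app
```

The node-level half (the Cluster Autoscaler) is typically deployed as a Kubernetes Deployment configured with your EKS node group's Auto Scaling group, or replaced by Karpenter; either way, no custom scaling code needs to be written or maintained.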
Author: LeetQuiz Editorial Team
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS). The company's workload is not consistent throughout the day. The company wants Amazon EKS to scale in and out according to the workload.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Use an AWS Lambda function to resize the EKS cluster.
B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
D. Use Amazon API Gateway and connect it to Amazon EKS.
E. Use AWS App Mesh to observe network activity.