
Answer-first summary for fast verification
Answer: The cluster's current resource usage is too high, leaving insufficient resources to schedule the pending Pod.
Option B is correct because a Pod stuck in the `Pending` state usually means the scheduler cannot find a node with enough free resources (CPU, memory, or both) to place it. This is common when the cluster is heavily loaded or under-scaled; since one replica is already Running, the requests themselves fit on a node, but not enough unreserved capacity remains for the second replica. Option A is incorrect because both replicas in a Deployment carry identical resource requests, and the Running replica proves those requests fit on at least one node. Option C is incorrect because a permissions problem would affect both replicas, and an image-pull failure typically surfaces as `ImagePullBackOff`/`ErrImagePull` rather than a bare `Pending`. Option D is incorrect because preemption removes the node and would therefore affect every Pod on it, not just one; the other replica is still Running. For more details, refer to [ManagedKube](https://managedkube.com/kubernetes/k8sbot/troubleshooting/pending/pod/2019/02/22/pending-pod.html) and the [Kubernetes documentation](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/).
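The scheduler compares each Pod's resource requests against the unreserved capacity (allocatable minus already-requested resources) of every node. A minimal sketch of where those requests live in a Deployment manifest, with illustrative names and values that are assumptions rather than details from the question:

```yaml
# Illustrative Deployment fragment; image name and request values are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: gcr.io/example-project/helloworld:1.0  # hypothetical image
        resources:
          requests:
            cpu: "500m"      # the scheduler only considers nodes with at least 500m unreserved CPU
            memory: "256Mi"  # and at least 256Mi unreserved memory
```

To confirm the diagnosis, `kubectl describe pod <pending-pod-name>` typically shows a `FailedScheduling` event along the lines of `0/1 nodes are available: 1 Insufficient cpu.`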
Author: LeetQuiz Editorial Team
As a lead DevOps engineer at a cloud-based startup, you've deployed an application on a Google Kubernetes Engine cluster featuring a single preemptible node pool. Upon executing the command `kubectl get pods -l app=helloworld`, you observe that one of the two replicas in your deployment is in a Running status while the other remains Pending. What could be the underlying cause of this issue?
A
The pending Pod is requesting more resources than what's available on any single node in the cluster.
B
The cluster's current resource usage is too high, leaving insufficient resources to schedule the pending Pod.
C
The node pool's service account lacks the necessary permissions to pull the container images required by the pending Pod.
D
The pending Pod was scheduled on a preemptible node that has been preempted and is awaiting rescheduling on a new node.