
Answer-first summary for fast verification
Answer: Run the `kubectl logs POD_NAME` command, where the POD_NAME parameter is the name of the problematic Pod. Analyze the logs of the Pod from previous runs to determine the root cause of the Pod's failed start attempts.
When a Pod enters the CrashLoopBackOff state, the container within the Pod starts, crashes, and is repeatedly restarted by the kubelet with an increasing back-off delay. The first troubleshooting step is to inspect the container's logs to identify why it is failing. The `kubectl logs POD_NAME` command retrieves logs from the current container instance, and adding the `--previous` flag retrieves logs from the last terminated instance, which is exactly what you need to diagnose the crash. Option C correctly describes this approach. Option A is incorrect because you often cannot exec into a crashing Pod (the container may not stay running long enough), and /var/log/messages is a host-level syslog file, not where Kubernetes captures a container's stdout/stderr. Option B concerns IAM permissions for Artifact Registry access; a missing permission would manifest as ImagePullBackOff, not CrashLoopBackOff, because the image could not be pulled at all (a sketch of this check appears after the options below). Option D involves network egress issues, which are far less likely to cause an immediate container crash than the application errors visible in the logs.
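For concreteness, a minimal troubleshooting sequence might look like the following. POD_NAME is the placeholder used throughout this question; all flags shown are standard kubectl options.

```sh
# List Pods and confirm the failing one is in CrashLoopBackOff.
kubectl get pods

# Fetch logs from the current container instance
# (may be empty if the container just restarted).
kubectl logs POD_NAME

# Fetch logs from the previous, crashed container instance --
# usually the most useful output for a CrashLoopBackOff investigation.
kubectl logs POD_NAME --previous

# Check Pod events (exit codes, OOMKilled, failed probes) for extra context.
kubectl describe pod POD_NAME
```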
Author: LeetQuiz Editorial Team
You are deploying a containerized microservices application to GKE using container images pushed to Artifact Registry. After deployment, the services exhibit unexpected behavior. Upon running kubectl get pods, you find one Pod in a CrashLoopBackOff state. What steps should you take to troubleshoot this Pod?
A
Connect to the problematic Pod by running the kubectl exec -it POD_NAME -- /bin/bash command, where the POD_NAME parameter is the name of the problematic Pod. Inspect the logs in the /var/log/messages folder to determine the root cause.
B
Execute the gcloud projects get-iam-policy PROJECT_ID command, where the PROJECT_ID parameter is the name of the project where your Artifact Registry resides. Inspect the IAM bindings of the node pool's service account. Validate if the service account has the roles/artifactregistry.reader role.
C
Run the kubectl logs POD_NAME command, where the POD_NAME parameter is the name of the problematic Pod. Analyze the logs of the Pod from previous runs to determine the root cause of the Pod's failed start attempts.
D
In the Google Cloud console, navigate to Cloud Logging in the project of the cluster’s VPC. Enter a filter to show denied egress traffic to the Private Google Access CIDR range. Validate if egress traffic is denied from your GKE cluster to the Private Google Access CIDR range.
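For contrast, the permission check described in Option B can be run as shown below. This is a hedged sketch using standard gcloud flags; keep in mind that a missing roles/artifactregistry.reader binding would surface as ImagePullBackOff or ErrImagePull rather than CrashLoopBackOff.

```sh
# Show which members hold the Artifact Registry reader role in the project.
# PROJECT_ID is the placeholder from Option B.
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/artifactregistry.reader" \
  --format="table(bindings.members)"
```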