
Answer-first summary for fast verification
Answer: Add a new node pool with GPU support to the current GKE cluster and instruct the ML team to use the `cloud.google.com/gke-accelerator: nvidia-tesla-p100` nodeSelector in their pod specifications.
Option C is correct: adding a GPU-enabled node pool extends the existing cluster with minimal effort and cost, and the `cloud.google.com/gke-accelerator` nodeSelector ensures the ML team's training pods are scheduled only onto the GPU nodes. Option A is not optimal because creating and managing a separate cluster adds operational overhead and cost. Option B is disruptive and unnecessary: GPUs cannot be attached to existing nodes in a GKE node pool, and recreating every node would affect all teams' workloads when only the ML team needs GPUs. Option D is insufficient because `accelerator: gpu` is not a standard annotation; an annotation alone neither provisions GPU hardware nor influences scheduling. Option C leverages the existing GKE infrastructure and introduces additional resources only where they are needed.
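As a sketch, a pod spec for option C might look like the following (the pod name, container name, and image are illustrative). Note that on GKE the pod must also request a GPU via `nvidia.com/gpu` in its resource limits; the nodeSelector alone only steers scheduling to the right nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ml-training            # hypothetical pod name
spec:
  nodeSelector:
    # Schedule only onto nodes in the P100 node pool
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: trainer              # hypothetical container name and image
    image: gcr.io/example/trainer:latest
    resources:
      limits:
        nvidia.com/gpu: 1      # request one GPU (required on GKE)
```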
Author: LeetQuiz Editorial Team
In a fast-growing connected car technology startup, various teams are running non-production workloads on a single GKE cluster, each in different namespaces. The ML team is working on advanced ML/AI projects and requires access to Nvidia Tesla P100 GPUs for model training. How can you meet their request with minimal effort and cost?
A
Create a custom Kubernetes cluster on Compute Engine with GPU-enabled nodes and direct the ML team to use this cluster.
B
Enable GPUs and recreate all nodes in the existing GKE cluster to accommodate the ML team's needs.
C
Add a new node pool with GPU support to the current GKE cluster and instruct the ML team to use the `cloud.google.com/gke-accelerator: nvidia-tesla-p100` nodeSelector in their pod specifications.
D
Provide the ML team with guidelines to include the `accelerator: gpu` annotation in their pod specifications.