At a fast-growing connected car technology startup, several teams run non-production workloads on a single GKE cluster, each in its own namespace. The ML team, working on advanced ML/AI projects, needs access to NVIDIA Tesla P100 GPUs for model training. How can you meet their request with minimal effort and cost?
A
Create a custom Kubernetes cluster on Compute Engine with GPU-enabled nodes and direct the ML team to use this cluster.
B
Enable GPUs and recreate all nodes in the existing GKE cluster to accommodate the ML team's needs.
C
Add a new node pool with GPU support to the current GKE cluster and instruct the ML team to use the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector in their pod specifications.
D
Provide the ML team with guidelines to include the accelerator: gpu annotation in their pod specifications.
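For reference, the nodeSelector named in option C would typically appear in a pod spec along these lines. This is a minimal sketch: the pod name, namespace, and container image are illustrative assumptions, not part of the question.

```yaml
# Hypothetical pod spec illustrating option C.
# Name, namespace, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: ml-training      # hypothetical pod name
  namespace: ml-team     # hypothetical namespace
spec:
  nodeSelector:
    # Schedules the pod onto nodes in the P100 GPU node pool
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu   # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1   # a GPU resource request is also required
```

Note that the nodeSelector steers scheduling to the GPU node pool, while the `nvidia.com/gpu` resource limit is what actually allocates a GPU to the container.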