
Your organization is using a Google Kubernetes Engine (GKE) cluster to support various non-production workloads from different teams. The Machine Learning (ML) team has a requirement to utilize Nvidia Tesla P100 GPUs for training their models. Considering the need to minimize both effort and cost, what steps should you take to meet their requirements?
A. Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
D. Add a new GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
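For context, option D corresponds roughly to the sketch below. The cluster name, zone, node pool name, pod name, and container image are illustrative placeholders rather than values from the question, and the gcloud flags should be checked against the current GKE documentation.

# 1. Add a GPU-enabled node pool to the existing cluster (names and zone assumed).
gcloud container node-pools create gpu-pool \
  --cluster=my-nonprod-cluster \
  --zone=us-central1-a \
  --accelerator=type=nvidia-tesla-p100,count=1 \
  --num-nodes=1

# 2. The ML team targets those nodes with the nodeSelector from option D
#    and requests the GPU in their pod spec.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ml-training               # hypothetical pod name
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: trainer
    image: gcr.io/my-project/trainer:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1         # schedule onto a node with one GPU available
EOF

This keeps the rest of the cluster untouched (unlike B) and avoids running a separate self-managed cluster (unlike C), which is why it minimizes both effort and cost.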