In a fast-growing connected car technology startup, various teams run non-production workloads on a single GKE cluster, each in its own namespace. The ML team is working on advanced ML/AI projects and requires access to Nvidia Tesla P100 GPUs for model training. How can you meet their request with minimal effort and cost?
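One way to satisfy this with minimal effort and cost is to add a dedicated GPU node pool to the existing cluster and let it autoscale down to zero when no training jobs are running, so GPUs are only billed while in use. The sketch below assumes hypothetical names (`dev-cluster`, `gpu-pool`) and a zone where Tesla P100s are available; it is a configuration sketch, not a definitive answer.

```shell
# Add a GPU node pool to the existing cluster (hypothetical names/zone).
# --accelerator requests one Nvidia Tesla P100 per node.
# --enable-autoscaling with --min-nodes 0 lets the pool shrink to zero
# when the ML team has no pending GPU workloads, minimizing cost.
gcloud container node-pools create gpu-pool \
  --cluster dev-cluster \
  --zone us-central1-c \
  --accelerator type=nvidia-tesla-p100,count=1 \
  --machine-type n1-standard-4 \
  --enable-autoscaling --min-nodes 0 --max-nodes 3 \
  --num-nodes 0
```

GKE automatically taints GPU nodes with `nvidia.com/gpu`, so existing non-GPU workloads in other namespaces are not scheduled there; the ML team's pods simply request `nvidia.com/gpu` resources in their pod specs to land on the new pool.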