
Answer-first summary for fast verification
Answer: Introduce a node pool featuring preemptible VMs equipped with GPUs.
**Correct Option: C**

- **Cost-Effective GPU Usage:** Preemptible VMs cost significantly less than standard VMs, making them the most budget-friendly way to obtain GPU capacity for the long-running training jobs in this scenario.
- **GPU Utilization:** Attaching GPUs to the preemptible node pool gives the machine learning pipelines the computational power needed to train image processing models efficiently.
- **Flexible Scaling:** While autoscaling adapts to workload fluctuations, a dedicated preemptible node pool is the more cost-effective foundation for consistent, long-running tasks.

**Why other options are less suitable:**

- **Option A (Node Auto-Provisioning):** Automatically adjusts node count based on demand but does not specifically address the need for cost-effective GPU resources for long-running jobs.
- **Option B (VerticalPodAutoscaler):** Optimizes pod resource requests and limits but offers no cost savings on GPU capacity.
- **Option D (Autoscaling Node Pool with GPUs):** Provides GPU resources but, at standard VM pricing, is less cost-efficient than preemptible VMs.
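As a sketch of how option C could be provisioned: GKE's `gcloud container node-pools create` command accepts both `--preemptible` and `--accelerator` flags. The cluster name, zone, machine type, and GPU type below are placeholder assumptions, not values from the question.

```shell
# Sketch: create a node pool of preemptible VMs with GPUs attached.
# Cluster name, zone, machine type, and GPU model are assumptions --
# substitute values appropriate to your project.
gcloud container node-pools create preemptible-gpu-pool \
  --cluster=ml-cluster \
  --zone=us-central1-a \
  --preemptible \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --num-nodes=1
```

Note that GPU nodes also need the NVIDIA device drivers installed (on GKE Standard this is typically done via Google's driver-installer DaemonSet) before pods can request `nvidia.com/gpu` resources.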
Author: LeetQuiz Editorial Team
Your data science team utilizes Google Kubernetes Engine (GKE) for executing their machine learning pipelines, primarily focusing on training image processing models. Certain long-running, non-restartable jobs within these pipelines necessitate GPU usage. What is the most cost-effective solution to meet this requirement?
**A.** Implement the GKE cluster’s node auto-provisioning feature.

**B.** Apply a VerticalPodAutoscaler to the workloads in question.

**C.** Introduce a node pool featuring preemptible VMs equipped with GPUs.

**D.** Establish a node pool with GPU-enabled instances and activate autoscaling, setting a minimum size of 1.
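For context on how a training job would land on the node pool from option C: a pod that requests the `nvidia.com/gpu` resource is scheduled onto GPU nodes (GKE taints GPU nodes automatically and adds the matching toleration to pods that request the resource). The pod and image names below are illustrative placeholders, not part of the question.

```shell
# Illustrative manifest (names are placeholders): a training pod that
# requests one GPU, so the scheduler places it on the GPU node pool.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-model-training
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: gcr.io/my-project/trainer:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1   # requests one GPU from the node pool
EOF
```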