
Answer-first summary for fast verification
Answer: A (Implement automatic shutdown policies for idle Compute Engine instances) and B (Use Preemptible VMs for batch processing and other interruptible ML workloads).
Options A through D are all valid strategies recommended by Google for optimizing ML costs on GCP. Implementing automatic shutdown policies for idle instances (A) and using Preemptible VMs for interruptible workloads (B) are direct cost-saving measures, while monitoring GPU utilization (C) and treating notebooks as ephemeral resources (D) improve cost efficiency by ensuring resources are used judiciously. Because the question asks for the two most effective strategies, A and B are the intended picks for their direct, immediate impact on spend. Option E, which bundles all four practices, would only be the best choice in a single-answer format.
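To make the savings from strategies A and B concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly rates below are hypothetical placeholders, not actual GCP pricing; substitute current rates from the Compute Engine pricing page before drawing conclusions.

```python
# Hypothetical hourly rates (NOT real GCP pricing -- look up current rates).
ON_DEMAND_RATE = 0.50      # regular VM, $/hour (placeholder)
PREEMPTIBLE_RATE = 0.15    # preemptible equivalent, $/hour (placeholder)

def monthly_cost(rate_per_hour: float, active_hours_per_day: float, days: int = 30) -> float:
    """Cost of a VM that runs only during its active hours each day."""
    return rate_per_hour * active_hours_per_day * days

# Strategy A: auto-shutdown an on-demand VM outside working hours (8h/day vs 24h/day).
always_on = monthly_cost(ON_DEMAND_RATE, 24)
with_shutdown = monthly_cost(ON_DEMAND_RATE, 8)

# Strategy B: move interruptible batch work to a preemptible VM running 24h/day.
preemptible = monthly_cost(PREEMPTIBLE_RATE, 24)

print(f"always-on on-demand:  ${always_on:.2f}/month")
print(f"with auto-shutdown:   ${with_shutdown:.2f}/month")
print(f"preemptible, 24h/day: ${preemptible:.2f}/month")
```

With these placeholder rates, auto-shutdown alone cuts the monthly bill by two thirds, and preemptible capacity is cheaper still even running around the clock; the relative savings track whatever real rates you plug in.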
Author: LeetQuiz Editorial Team
As a consultant to the CIO of a large financial firm, you're tasked with advising on the optimization of machine learning (ML) projects on Google Cloud Platform (GCP) to achieve cost efficiency without compromising on performance. The firm is particularly concerned about unnecessary expenditures and seeks to implement Google's recommended best practices for ML projects. The CIO asks, 'Given our need to balance cost and performance in our ML initiatives, what strategies should we adopt to ensure we're following Google's best practices for cost optimization?' Please select the two most effective strategies from the options provided. (Choose two)
A. Implement automatic shutdown policies for idle Compute Engine instances to reduce costs associated with unused resources.
B. Use Preemptible VMs for batch processing and other non-critical ML workloads that can tolerate interruptions, as they offer significant cost savings over regular instances.
C. Deploy monitoring tools to track and optimize GPU utilization across ML workloads, given the premium cost of GPUs, to ensure efficient use of these resources.
D. Adopt a policy of treating AI Platform Notebooks as ephemeral resources, deleting them when not in active use to avoid incurring costs for reserved but unused capacity.
E. All of the above strategies are part of Google's recommended best practices for optimizing costs in ML projects on GCP.
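A sketch of the idle-detection logic behind Option A, assuming CPU-utilization samples have already been collected (e.g., from Cloud Monitoring). The 5% threshold and six-sample window are illustrative assumptions, not values prescribed by Google:

```python
def should_shut_down(cpu_samples: list[float],
                     threshold: float = 0.05,
                     min_idle_samples: int = 6) -> bool:
    """Return True if the most recent `min_idle_samples` readings are all below `threshold`.

    cpu_samples: fractional CPU utilization readings (0.0-1.0), most recent last,
    e.g. one sample per 10-minute monitoring window.
    """
    if len(cpu_samples) < min_idle_samples:
        return False  # not enough history to call the instance idle
    return all(s < threshold for s in cpu_samples[-min_idle_samples:])

# An hour of near-zero CPU (six 10-minute samples) would trigger a shutdown:
print(should_shut_down([0.02, 0.01, 0.0, 0.03, 0.01, 0.02]))  # True
# A single busy window resets the decision:
print(should_shut_down([0.02, 0.01, 0.0, 0.03, 0.40, 0.02]))  # False
```

In practice this check would run on a schedule (e.g., Cloud Scheduler plus a Cloud Function) and call the Compute Engine API to stop instances it flags, but the decision rule itself is this simple.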