
Answer-first summary for fast verification
Answer: A Deep Learning VM with an n1-standard-2 machine type and 1 GPU, featuring all required libraries pre-installed and optimized for deep learning tasks.
Choosing a Deep Learning VM with an n1-standard-2 machine type and 1 GPU, with all libraries pre-installed, is the optimal choice for this scenario. Here's why:

- **Pre-installed libraries**: Saves time and effort by eliminating manual setup, letting you focus on model development.
- **GPU acceleration**: Essential for significantly speeding up deep learning training, particularly for CNNs, which are computationally intensive.
- **Managed environment**: Simplifies both setup and ongoing maintenance, reducing operational overhead.
- **Cost-effectiveness**: Offers a practical balance between performance and expense, suitable for projects with budget constraints.

Other options considered:

- **8 GPUs on Compute Engine**: Offers high computational power, but the manual installation of dependencies (CUDA, cuDNN) is time-consuming and error-prone, delaying the project further.
- **1 TPU on Compute Engine**: TPUs are specialized for machine learning workloads, but the codebase lacks the Estimator abstraction and device placement that TPU training requires, so adopting one would mean manual configuration and code changes.
- **e2-highcpu-16 Deep Learning VM**: Provides strong CPU performance but lacks the GPU support critical for accelerating CNN training, which is a key requirement here.

Opting for the Deep Learning VM with GPU support enables a swift setup and accelerates the model development cycle, aligning with the stated time-to-market goals.
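A key point above is that the codebase has no manual device placement, and on a Deep Learning VM with a GPU none is needed: TensorFlow places eligible ops on the GPU automatically when one is visible, and the same code falls back to CPU otherwise. A minimal sketch (assuming TensorFlow is installed, as it is on the Deep Learning VM images):

```python
import tensorflow as tf

# List any GPUs TensorFlow can see; empty on a CPU-only machine.
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# No tf.device(...) annotations: TensorFlow picks the device itself.
# On a GPU-equipped Deep Learning VM this matmul runs on the GPU;
# on a CPU-only setup the identical code still runs, just slower.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
print(b.numpy())
```

The same property is why the unmodified CNN training code can move from the on-premises CPU setup to the GPU VM without changes.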
Author: LeetQuiz Editorial Team
Your team is developing a Convolutional Neural Network (CNN) from scratch for an image recognition project. Initial tests on your on-premises CPU-only setup showed promising accuracy, but the training process was prohibitively slow, delaying your project timeline. To accelerate model training and meet the aggressive time-to-market goals, you're evaluating Google Cloud VMs for their superior hardware capabilities. Your current codebase does not include manual device placement and is not wrapped in an Estimator model-level abstraction. Given these constraints, and considering the need for a balance between cost, setup time, and performance, which of the following environments should you choose for training your model? (Choose one correct option)
**A.** A VM on Compute Engine configured with 8 GPUs, requiring manual installation and configuration of all dependencies, including the CUDA and cuDNN libraries.

**B.** A VM on Compute Engine equipped with 1 TPU, necessitating manual setup of all dependencies and TPU-specific optimization of your code.

**C.** A Deep Learning VM with an e2-highcpu-16 machine type, offering all necessary deep learning libraries pre-installed but without GPU support.

**D.** A Deep Learning VM with an n1-standard-2 machine type and 1 GPU, featuring all required libraries pre-installed and optimized for deep learning tasks.
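For context on option D, a Deep Learning VM like this can be provisioned with a single `gcloud` command. The instance name, zone, GPU type, and image family below are illustrative assumptions, not values from the question:

```shell
# Sketch: create a Deep Learning VM (n1-standard-2, 1 GPU) with
# deep learning libraries pre-installed. Names/zone are examples.
gcloud compute instances create my-training-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-2 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --image-family=tf-latest-gpu \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --metadata=install-nvidia-driver=True
```

The `deeplearning-platform-release` image project supplies images with the frameworks and GPU drivers already set up, which is exactly what removes the manual CUDA/cuDNN work required by option A.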