
Answer-first summary for fast verification
Answer: D and E — a Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a preemptible v3-8 Tensor Processing Unit (TPU), and that same configuration combined with a Cloud Storage bucket for frequent checkpointing.
Option D is correct because a preemptible v3-8 TPU is substantially cheaper than an equivalent non-preemptible TPU or GPU-based configuration, making it a cost-effective choice for long-running training jobs. The trade-off is that a preemptible TPU can be reclaimed at any time, which is exactly why the workload's frequent checkpointing matters: training can resume from the last checkpoint rather than restarting from scratch. Option E is also correct because it pairs the cost savings of a preemptible TPU with a Cloud Storage bucket as a durable, scalable destination for those checkpoints, so progress survives preemption of the instance itself and the model trains efficiently without unnecessary expense.
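The preemption-safe pattern described above — checkpoint periodically, and on startup resume from the most recent checkpoint — can be sketched in plain Python. This is a minimal illustration, not Google's API: the file path stands in for a Cloud Storage object (in a real TensorFlow job you would point something like tf.train.CheckpointManager at a gs:// bucket path instead), and the loop body is a hypothetical stand-in for an optimization step.

```python
import json
import os
import tempfile

def train(total_steps, checkpoint_path, checkpoint_every=10):
    """Toy training loop that checkpoints its state so it can resume
    after a preemption. checkpoint_path stands in for a Cloud Storage
    object (a gs://... URI in a real job)."""
    step, loss = 0, 100.0
    # Resume from the last checkpoint if one exists.
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            state = json.load(f)
        step, loss = state["step"], state["loss"]
    while step < total_steps:
        step += 1
        loss *= 0.99  # stand-in for one optimization step
        if step % checkpoint_every == 0 or step == total_steps:
            # Persist progress so a preempted run loses at most
            # checkpoint_every steps of work.
            with open(checkpoint_path, "w") as f:
                json.dump({"step": step, "loss": loss}, f)
    return step, loss

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
train(10, ckpt)                       # first run, "preempted" at step 10
final_step, final_loss = train(25, ckpt)  # second run resumes from step 10
```

The second call picks up at step 10 rather than step 0, which is the behavior that makes cheap preemptible hardware safe to use for long-running training.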
Author: LeetQuiz Editorial Team
A financial institution is developing a TensorFlow model to predict the impact of consumer spending on global inflation. The dataset is large and complex, requiring extensive training with frequent checkpointing across various hardware types. The institution aims to minimize costs while ensuring the model's training is efficient and scalable. Given these requirements, which of the following hardware options should be selected? (Choose two.)
A
A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with one NVIDIA P100 GPU
B
A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a non-preemptible v3-8 Tensor Processing Unit (TPU)
C
A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with 4 NVIDIA P100 GPUs
D
A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a preemptible v3-8 Tensor Processing Unit (TPU)
E
A combination of a Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a preemptible v3-8 Tensor Processing Unit (TPU) and a Cloud Storage bucket for frequent checkpointing