
Answer-first summary for fast verification
Answer: Train your model with DLVM images on Vertex AI, and ensure that your code utilizes NumPy and SciPy internal methods whenever possible, focusing on code optimization.
Given the project's tight budget and the need for scalability, code optimization is the most effective initial step. scikit-learn does not support GPU acceleration or built-in distributed training, so attaching GPUs (option C) would raise costs without reducing training time, while migrating to TensorFlow (option A) or managing multiple Compute Engine VMs (option B) adds complexity and expense the budget cannot absorb. Deep Learning VM (DLVM) images ship with optimized builds of NumPy and SciPy, so replacing pure-Python loops with their vectorized internal methods cuts training time at no extra infrastructure cost and leaves room to scale as data volumes grow.
Author: LeetQuiz Editorial Team
You've developed a custom ML model using scikit-learn for a project that processes large datasets. The training time has become a bottleneck, affecting the project timeline. To optimize the training process, you're considering migrating your model to Vertex AI Training. The project has constraints including a tight budget and the need for scalability to handle increasing data volumes. Given these constraints, what initial approach should you consider to effectively reduce the training time? (Choose one correct option)
A
Migrate your model to TensorFlow, and train it using Vertex AI Training, despite the additional complexity and potential cost increase.
B
Train your model in a distributed mode using multiple Compute Engine VMs, considering the setup and management overhead.
C
Train your model using Vertex AI Training with GPUs, leveraging the hardware acceleration for scikit-learn models.
D
Train your model with DLVM images on Vertex AI, and ensure that your code utilizes NumPy and SciPy internal methods whenever possible, focusing on code optimization.
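To make option D's advice concrete, the sketch below compares a pure-Python accumulation loop against the equivalent NumPy internal method. The variable names and timings are illustrative only, but the pattern is the general one: a single vectorized call runs in optimized C/BLAS code instead of the Python interpreter, which is where the training-time savings come from.

```python
import time
import numpy as np

# Illustrative data: two large vectors, as might appear in a
# feature-engineering step feeding a scikit-learn model.
rng = np.random.default_rng(0)
n = 1_000_000
a = rng.random(n)
b = rng.random(n)

# Pure-Python loop: every element crosses the interpreter boundary.
start = time.perf_counter()
dot_loop = 0.0
for x, y in zip(a, b):
    dot_loop += x * y
loop_time = time.perf_counter() - start

# Vectorized equivalent: one call into NumPy's compiled internals.
start = time.perf_counter()
dot_vec = float(np.dot(a, b))
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.4f}s")

# Same result (up to floating-point summation order), far less time.
assert abs(dot_loop - dot_vec) <= 1e-6 * abs(dot_vec)
```

On a DLVM image this gap is typically even larger, because the preinstalled NumPy/SciPy builds link against optimized math libraries; no GPUs or extra VMs are needed to get the speedup.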