
You've developed a custom ML model using scikit-learn for a project that processes large datasets. Training time has become a bottleneck and is affecting the project timeline. To speed up training, you're considering migrating the model to Vertex AI Training. The project's constraints include a tight budget and the need to scale to growing data volumes. Given these constraints, what initial approach should you take to reduce training time? (Choose one correct option)
A. Migrate your model to TensorFlow and train it using Vertex AI Training, despite the additional complexity and potential cost increase.
B. Train your model in distributed mode across multiple Compute Engine VMs, accepting the setup and management overhead.
C. Train your model using Vertex AI Training with GPUs, leveraging hardware acceleration for scikit-learn models.
D. Train your model with DLVM images on Vertex AI, and ensure your code uses NumPy and SciPy internal methods wherever possible, focusing on code optimization.
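Option D hinges on the fact that NumPy and SciPy dispatch array operations to compiled internals, which are typically far faster than equivalent pure-Python loops. The sketch below is a minimal, hypothetical illustration of that idea (the row-norm computation and array sizes are illustrative, not from the question): the same result computed with a Python loop versus a single vectorized NumPy call.

```python
import time
import numpy as np

# Illustrative dataset: 200,000 samples, 20 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200_000, 20))

# Pure-Python loop: iterates row by row in the interpreter.
start = time.perf_counter()
loop_norms = [sum(v * v for v in row) ** 0.5 for row in X]
loop_time = time.perf_counter() - start

# Vectorized equivalent: one call into NumPy's compiled internals.
start = time.perf_counter()
vec_norms = np.sqrt((X * X).sum(axis=1))
vec_time = time.perf_counter() - start

# Both paths compute the same per-row Euclidean norms.
assert np.allclose(loop_norms, vec_norms)
print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.3f}s")
```

On typical hardware the vectorized version runs orders of magnitude faster, which is why, for a CPU-bound scikit-learn workload, profiling and vectorizing your own code is a cheaper first step than adding GPUs (which scikit-learn generally does not use) or rewriting the model in another framework.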