
You recently developed a deep learning model for a classification task using a large dataset consisting of millions of samples. After training the model for several epochs, you observed that both the training and validation losses remained almost constant and did not decrease. To identify the problem and improve your model, what should you do first?
A. Verify that your model can obtain a low loss on a small subset of the dataset
B. Add handcrafted features to inject your domain knowledge into the model
C. Use the Vertex AI hyperparameter tuning service to identify a better learning rate
D. Use hardware accelerators and train your model for more epochs
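
Option A describes the common "overfit a tiny subset" sanity check for debugging a training pipeline. Below is a minimal sketch of that check, assuming a PyTorch classifier; the model, dataset, and sizes are illustrative stand-ins, not part of the original question.

```python
# Sanity check: try to drive the training loss near zero on a tiny subset.
# All names and shapes below are hypothetical placeholders for the real setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# Illustrative stand-ins for the real dataset and model.
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))
dataset = TensorDataset(X, y)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

# Take a tiny subset (e.g. 32 samples) and try to overfit it.
small = Subset(dataset, range(32))
loader = DataLoader(small, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

print(f"Loss on tiny subset after overfitting attempt: {loss.item():.4f}")
# If the loss does not approach zero even here, the problem is likely in the
# model, loss function, or training loop rather than in data volume or compute.
```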