
Answer-first summary for fast verification
Answer: D — add L1/L2 regularization and make use of training data augmentation.
The question asks for the TRUE statement to minimize overfitting in a deep CNN for image classification. Overfitting occurs when a model learns the training data too well, including its noise, and fails to generalize to new data.

Option D is correct because:
1. L1/L2 regularization adds penalty terms to the loss function, discouraging large or complex weights and reducing overfitting.
2. Training data augmentation (e.g., rotations, flips) artificially increases dataset size and diversity, improving generalization.

Community discussion strongly supports D (100% consensus, high upvotes), with explanations noting that regularization and data augmentation are standard techniques.

The other options are less suitable: A and E add dense layers, increasing model complexity and potentially worsening overfitting; B and C suggest reducing training data, which exacerbates overfitting by limiting the model's ability to learn general patterns.
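To make the two techniques in option D concrete, here is a minimal NumPy sketch (the function names `l2_penalty` and `augment` are illustrative, not from any framework): the first computes the L2 penalty term that would be added to the training loss, and the second applies a simple random-horizontal-flip augmentation to a batch of images.

```python
import numpy as np

def l2_penalty(weights, lam=1e-3):
    # L2 regularization: add lam * sum(w^2) over all weight tensors
    # to the loss, discouraging large weights.
    return lam * sum(np.sum(w ** 2) for w in weights)

def augment(images, rng):
    # Simple data augmentation: flip each image horizontally
    # with probability 0.5. images has shape (N, H, W).
    flips = rng.random(len(images)) < 0.5
    return np.stack(
        [img[:, ::-1] if f else img for img, f in zip(images, flips)]
    )

# Example: penalty for a single 2x2 weight matrix of ones.
w = [np.ones((2, 2))]
print(l2_penalty(w, lam=0.5))  # 0.5 * 4 = 2.0

rng = np.random.default_rng(0)
batch = np.arange(8.0).reshape(2, 2, 2)
print(augment(batch, rng).shape)  # same shape as input: (2, 2, 2)
```

In practice you would use a framework's built-in equivalents (e.g., weight-decay options in the optimizer and an augmentation pipeline) rather than hand-rolling these, but the mechanics are the same: the penalty shrinks weights toward zero, and augmentation enlarges the effective training set.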
Author: LeetQuiz Editorial Team
You are building a deep convolutional neural network (CNN) for image classification. You observe signs of overfitting in the model and want to ensure overfitting is minimized while the model converges to an optimal fit.
Which of the following statements is TRUE for achieving this goal?
A. You have to add an additional dense layer with 512 input units, and reduce the amount of training data.
B. You have to add L1/L2 regularization, and reduce the amount of training data.
C. You have to reduce the amount of training data and make use of training data augmentation.
D. You have to add L1/L2 regularization, and make use of training data augmentation.
E. You have to add an additional dense layer with 512 input units, and add L1/L2 regularization.