You've noticed that a machine learning model's performance in production doesn't match its results on the validation set, leading you to suspect overfitting. Which training technique can help mitigate this risk?

A. Gradient descent
B. L2 regularization
C. Backpropagation
D. Label engineering
Explanation:
L2 regularization mitigates overfitting by adding a penalty proportional to the squared magnitude of the model's weights to the loss function; this discourages large weights and yields a simpler model that generalizes better to production data. Gradient descent is a general method for optimizing the model's parameters and does not specifically address overfitting. Backpropagation is the algorithm for computing the gradients used to adjust weights in a neural network, not a regularization technique. 'Label engineering' is a distractor; the real term is 'feature engineering', which means creating additional input features for the model and does not reduce overfitting. For more details, visit Google's documentation on preventing overfitting.
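As an illustration, here is a minimal sketch of applying L2 regularization in Keras. The layer sizes, the input width of 20 features, and the penalty strength of 0.01 are placeholder assumptions for this example, not values from the question:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# A small dense network where each layer adds an L2 penalty on its
# kernel weights. The term lambda * sum(w^2) is added to the loss,
# discouraging large weights and reducing overfitting.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),  # 20 input features (assumed)
    layers.Dense(
        64,
        activation="relu",
        kernel_regularizer=regularizers.l2(0.01),  # lambda = 0.01 (assumed)
    ),
    layers.Dense(
        1,
        activation="sigmoid",
        kernel_regularizer=regularizers.l2(0.01),
    ),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Training would then proceed as usual; X_train, y_train, X_val, and
# y_val below are hypothetical arrays standing in for your data:
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
```

With the penalty in place, the gap between training and validation loss typically narrows; tuning the penalty strength (here 0.01) trades off underfitting against overfitting.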