
Answer-first summary for fast verification
Answer: L1 or Lasso Regression
L1 or Lasso Regression is the correct choice because it penalizes the absolute value of each coefficient's magnitude, which can drive the coefficients of the least useful features exactly to zero and thereby performs implicit feature selection. In contrast, L2 or Ridge Regression uses a squared-magnitude penalty that shrinks large coefficients but rarely drives any of them all the way to zero. Dropout is a different form of regularization that randomly ignores subsets of units during each training step rather than selecting features by importance, and Backpropagation is the algorithm that propagates error gradients through a neural network, not a regularization technique at all. For more insights, visit [Google Cloud's guide on preventing overfitting](https://cloud.google.com/bigquery-ml/docs/preventing-overfitting) and [Machine Learning Mastery on overfitting and underfitting](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/).
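The contrast between the two penalties can be sketched numerically. Below is a minimal NumPy illustration (not any particular library's API): linear regression fit by gradient descent, once with an L2 gradient penalty and once with an L1 proximal soft-threshold step, on synthetic data where only the first two of ten features matter. The data shapes, penalty strength, and step counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 10 features, but only the first 2 actually matter.
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:2] = [3.0, -2.0]
y = X @ true_w + 0.1 * rng.normal(size=n)

def fit(X, y, penalty, lam=0.5, lr=0.01, steps=2000):
    """Least-squares fit by gradient descent with an L1 or L2 penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        if penalty == "l2":
            # Ridge: squared-magnitude penalty adds lam * w to the gradient.
            w = w - lr * (grad + lam * w)
        else:
            # Lasso: gradient step, then soft-threshold (proximal operator
            # of the absolute-value penalty) pulls small weights to exactly 0.
            w = w - lr * grad
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w_l1 = fit(X, y, "l1")
w_l2 = fit(X, y, "l2")

print("L1 exact zeros:", int(np.sum(w_l1 == 0.0)))
print("L2 exact zeros:", int(np.sum(w_l2 == 0.0)))
```

Running this, the L1 fit zeroes the eight irrelevant coefficients exactly, while the L2 fit leaves them small but nonzero, which is the behavior the question is probing.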
Author: LeetQuiz Editorial Team
When developing a deep learning model with a large number of features and uncertainty about their importance, which regularization technique would effectively drive the parameters of the least important features toward zero?
A
Dropout
B
L2 or Ridge Regression
C
Backpropagation
D
L1 or Lasso Regression