
## Answer

**B. Increase the regularization parameter to decrease model complexity.**
## Detailed Explanation

The scenario described in the question is a classic case of **overfitting**. Overfitting occurs when a machine learning model learns the training data too well, including its noise and random fluctuations, rather than the underlying patterns. This results in excellent performance on the training data but poor generalization to new, unseen data.

Let's analyze each option:

**Option A: Decrease the regularization parameter to increase model complexity.**
This would make the problem worse. Decreasing regularization reduces the constraints on the model, allowing it to become even more complex and fit the training data even more closely, thereby increasing overfitting.

**Option B: Increase the regularization parameter to decrease model complexity.**
**This is the correct solution.** Regularization techniques (such as L1/L2 regularization) add a penalty term to the model's loss function based on the magnitude of the coefficients. Increasing the regularization parameter strengthens this penalty, which discourages the model from becoming overly complex. This helps the model focus on the most important patterns in the data rather than memorizing noise, thereby reducing overfitting and improving generalization to new data.

**Option C: Add more features to the input data.**
Adding features could introduce more noise or irrelevant signals, which might exacerbate overfitting if the model starts learning from these additional, possibly irrelevant, patterns. While feature engineering is important, adding features without proper selection or regularization is not a direct solution to overfitting and could make the problem worse.

**Option D: Train the model for more epochs.**
Training for more epochs typically leads to the model fitting the training data even more closely, which increases the risk of overfitting. In fact, early stopping (halting training before the model overfits) is a common technique to prevent exactly this issue.
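To make the effect of option B concrete, here is a minimal sketch of L2 (ridge) regularization using the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy. The synthetic data, coefficient values, and the two λ settings are illustrative assumptions, not part of the original question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: 20 samples, 5 features (illustrative).
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=20)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^(-1) X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_weak = ridge_fit(X, y, lam=0.01)    # weak regularization
w_strong = ridge_fit(X, y, lam=100.0) # strong regularization

# A larger regularization parameter shrinks the coefficient vector,
# i.e. it reduces model complexity.
print(np.linalg.norm(w_strong) < np.linalg.norm(w_weak))  # True
```

The key point is visible in the penalty term λI: as λ grows, it dominates XᵀX and pulls every coefficient toward zero, which is exactly the "decrease model complexity" effect the correct answer describes.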
**Conclusion:** The optimal solution is to **increase the regularization parameter**, as it directly addresses the root cause of overfitting by reducing model complexity and encouraging better generalization. This aligns with AWS best practices for building robust machine learning models that perform well in production environments.
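As a side note, the early-stopping technique mentioned under option D can be sketched as follows. The per-epoch validation losses are synthetic, and `patience` is an illustrative hyperparameter:

```python
# Hypothetical per-epoch validation losses: improving at first, then
# degrading as the model starts to overfit the training data.
val_losses = [0.90, 0.70, 0.55, 0.48, 0.45, 0.46, 0.49, 0.53, 0.60]

def early_stop_epoch(losses, patience=2):
    """Return the 0-indexed epoch at which training would stop: the
    first epoch after the validation loss has failed to improve for
    `patience` consecutive epochs."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return len(losses) - 1

print(early_stop_epoch(val_losses))  # prints 6: loss rose for 2 epochs in a row
```

This halts training once generalization stops improving, which is why option D (simply training longer) moves in the wrong direction for an overfitting model.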
Author: LeetQuiz Editorial Team
## Question

A company's machine learning model for predicting customer churn achieves high accuracy on the training data but performs poorly on new, unseen data. What solution will address this problem?

- A. Decrease the regularization parameter to increase model complexity.
- B. Increase the regularization parameter to decrease model complexity.
- C. Add more features to the input data.
- D. Train the model for more epochs.