
You recently deployed an ML model in a production environment. After monitoring its performance for three months, you notice that the model is underperforming on certain subgroups, leading to biased predictions. You suspect that this issue arises from class imbalances in the training data, but collecting additional data is not an option. Given these constraints, what actions should you take to address the model's inequitable performance? (Choose two.)
A. Remove training examples of high-performing subgroups, and retrain the model.
B. Add an additional objective to penalize the model more for errors made on the minority class, and retrain the model.
C. Remove the features that have the highest correlations with the majority class.
D. Upsample or reweight your existing training data, and retrain the model.
E. Redeploy the model, and provide a label explaining the model's behavior to users.
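
Options B and D describe concrete rebalancing techniques. A minimal sketch of both, using a hypothetical toy label list (90 majority examples, 10 minority), is shown below: upsampling duplicates minority examples until the classes are balanced, and inverse-frequency reweighting makes minority-class errors cost proportionally more during training.

```python
import random
from collections import Counter

random.seed(0)

# Toy imbalanced labels: 90 majority-class (0), 10 minority-class (1).
y = [0] * 90 + [1] * 10

# Upsampling (option D): sample minority indices with replacement
# until both classes have the same count.
minority = [i for i, label in enumerate(y) if label == 1]
extra = [random.choice(minority) for _ in range(90 - len(minority))]
y_up = y + [y[i] for i in extra]

# Reweighting (options B/D): weight each example inversely to its
# class frequency, so a minority-class error is penalized more.
counts = Counter(y)
weights = [len(y) / (2 * counts[label]) for label in y]
```

With these weights, each minority example carries a weight of 5.0 versus roughly 0.56 for a majority example; passing them as per-sample weights (or as a class-weighted loss term) during retraining implements the heavier minority-class penalty without collecting new data.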