
Explain the concept of model interpretability in AutoML. Discuss why it is important and how AutoML tools facilitate the interpretation of complex models. Provide examples of interpretability techniques used in AutoML.
A. Feature importance, partial dependence plots, and SHAP values.
B. Confusion matrix, ROC curve, and precision-recall curve.
C. LIME, anchor explanations, and decision trees.
D. Gradient boosting, random forests, and neural networks.
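
For reference, here is a minimal sketch of two interpretability techniques named in the options above: permutation feature importance and partial dependence plots, using scikit-learn's `permutation_importance` and `PartialDependenceDisplay`. The dataset and model choices are illustrative assumptions, not part of the question itself.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model (assumptions, not specified by the question).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation feature importance: how much does shuffling each feature
# degrade the model's score on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.4f}")

# Partial dependence plot: marginal effect of the two most important features
# on the predicted probability, averaged over the data distribution.
PartialDependenceDisplay.from_estimator(model, X_test, features=[X.columns[i] for i in top[:2]])
plt.show()
```

SHAP values, also listed above, are typically computed with the separate `shap` package rather than scikit-learn.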