
Answer-first summary for fast verification
Answer: Use the interpretability package to generate an explainer for the model.
The question requires determining the influence of each feature on the model's predictions to ensure compliance with regulations prohibiting decisions based on location. Option D (Use the interpretability package to generate an explainer for the model) is correct because Azure ML's interpretability package (azureml-interpret) provides tools such as SHAP (SHapley Additive exPlanations) values, which quantify each feature's contribution to predictions. This directly addresses the need to identify whether location or any other prohibited feature is inappropriately influencing outcomes. The community discussion supports this choice (e.g., 7 upvotes), with comments emphasizing that explainers surface per-feature importance, such as Shapley values, for detecting regulatory violations.

The other options do not measure feature influence: A (data drift monitoring) tracks changes in data distribution over time; B (a confusion matrix) assesses overall classification performance, not per-feature contributions; C (Hyperdrive) is for hyperparameter tuning, not interpretability; E (adding tags) records metadata about the model registration and performs no analysis of feature impact.
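To make the idea concrete, here is a minimal, self-contained sketch of the concept behind an explainer: permutation feature importance, which measures how much predictions change when one feature's values are shuffled. The toy model and data below are hypothetical illustrations only; in Azure ML you would instead generate an explainer with the azureml-interpret package (for example, its TabularExplainer) against your registered model.

```python
import random

def toy_model(row):
    # Hypothetical loan-repayment scorer: driven by income and credit
    # history, deliberately ignoring 'location' (the third feature).
    income, credit, location = row
    return 0.7 * income + 0.3 * credit

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Average prediction change when one feature's column is shuffled.

    A feature the model ignores will score exactly 0.0, since shuffling
    it cannot change any prediction.
    """
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    n_features = len(rows[0])
    importances = []
    for f in range(n_features):
        total = 0.0
        for _ in range(n_repeats):
            col = [r[f] for r in rows]
            rng.shuffle(col)
            shuffled = [
                tuple(col[i] if j == f else r[j] for j in range(n_features))
                for i, r in enumerate(rows)
            ]
            preds = [model(r) for r in shuffled]
            total += sum(abs(p - b) for p, b in zip(preds, baseline)) / len(rows)
        importances.append(total / n_repeats)
    return importances

# Hypothetical applicants: (income, credit_score, location_code)
rows = [(50, 0.9, 3), (20, 0.4, 1), (80, 0.7, 2), (35, 0.95, 4)]
imp = permutation_importance(toy_model, rows)
# imp[2] (location) is exactly 0.0, evidencing that this model does not
# base decisions on the prohibited feature.
```

A real explainer (SHAP-based, as in azureml-interpret) attributes contributions per prediction rather than globally shuffling columns, but the compliance question it answers is the same: does the prohibited feature move the output?
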
Author: LeetQuiz Editorial Team
You have trained and registered a machine learning model in Azure ML that predicts loan repayment likelihood. To ensure the model is compliant with government regulations and does not make decisions based on prohibited features like an applicant's location, you need to identify the contribution of each data feature to the model's predictions.
What should you do to determine the influence of each feature?
A
Enable data drift monitoring for the model and its training dataset.
B
Score the model against some test data with known label values and use the results to calculate a confusion matrix.
C
Use the Hyperdrive library to test the model with multiple hyperparameter values.
D
Use the interpretability package to generate an explainer for the model.
E
Add tags to the model registration indicating the names of the features in the training dataset.