
Answer-first summary for fast verification
Answer: Interpretability
The question asks which Responsible AI dashboard component provides explanations of model behavior using feature importance measures. Based on the community discussion and Microsoft documentation, the Interpretability component (option D) is designed for exactly this purpose: it helps you understand how a model makes predictions by surfacing feature importance at both the aggregate (global) and individual (local) levels. One comment suggested 'Model interpretability' as the full name, but the official documentation and community consensus (67% of votes for D) confirm that 'Interpretability' is the component name used in the dashboard. The other options serve different purposes: Counterfactual what-if (A) tests hypothetical scenarios, Causal inference (B) examines cause-and-effect relationships, and Fairness assessment (C) evaluates model bias. None of these directly provide feature importance explanations.
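The core idea behind feature importance can be illustrated with a small, dependency-free sketch: permute one feature at a time and measure how much prediction error grows. Note this is only a conceptual analogue of what the dashboard's Interpretability component reports; the dashboard itself uses dedicated explainers under the hood, and all names below are illustrative.

```python
import random

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Aggregate feature importance: average error increase when a feature is shuffled."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    baseline = mse([predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature's relationship to the target
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            increases.append(mse([predict(row) for row in Xp]) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

# Toy model: predictions depend only on feature 0, never on feature 1.
model = lambda row: 3.0 * row[0]
X = [[float(i), float(i % 2)] for i in range(20)]
y = [3.0 * row[0] for row in X]
imp = permutation_importance(model, X, y)
# imp[0] is large (shuffling feature 0 hurts); imp[1] stays at 0.
```

A high importance score here means the model's error rises sharply when that feature is scrambled, which is the same "which features drive predictions" signal the Interpretability component visualizes globally and per-prediction.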
Author: LeetQuiz Editorial Team
You are managing an Azure Machine Learning workspace and need to provide model behavior explanations using feature importance measures. To configure a Responsible AI dashboard for this purpose, which component should you use?
A
Counterfactual what-if
B
Causal inference
C
Fairness assessment
D
Interpretability