
Answer-first summary for fast verification
Answer: Integrated gradients, Sampled Shapley, XRAI
Vertex Explainable AI provides insight into model predictions through three feature attribution methods:

- **Integrated gradients**: computes the gradients of the model's output with respect to its inputs at points along a straight-line path from a baseline to the input, then integrates those gradients to assign an importance score to each feature.
- **Sampled Shapley**: based on cooperative game theory; estimates each feature's contribution by sampling permutations of features, approximating exact Shapley values at a fraction of their computational cost.
- **XRAI**: builds on integrated gradients by merging pixel-level attributions into regions, identifying the areas of an image that contribute most to the prediction; this makes it particularly useful for image models.

'Maximum Likelihood' is incorrect: it is a statistical method for estimating the parameters of a probability distribution, not a feature attribution technique. 'Decision Trees', while interpretable as models, are not a feature attribution method offered by Vertex Explainable AI. For further reading, consult the Vertex AI documentation on Explainable AI and Google Cloud's AI Explainability Whitepaper.
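To make the integrated gradients description concrete, here is a minimal, self-contained sketch of the idea on a toy differentiable function. This is illustrative only: the function, gradient, and step count are made up for the example, and Vertex Explainable AI applies the method to deployed models through its service, not through code like this.

```python
def model(x):
    # Toy "model": f(x1, x2) = x1^2 + 3*x1*x2 (stands in for a real network)
    x1, x2 = x
    return x1 * x1 + 3.0 * x1 * x2

def gradient(x):
    # Analytic gradient of the toy model: (2*x1 + 3*x2, 3*x1).
    x1, x2 = x
    return [2.0 * x1 + 3.0 * x2, 3.0 * x1]

def integrated_gradients(inp, baseline, steps=100):
    # Approximate the path integral with a midpoint Riemann sum of gradients
    # evaluated along the straight line from baseline to input.
    total = [0.0] * len(inp)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (i - b) for b, i in zip(baseline, inp)]
        total = [t + g for t, g in zip(total, gradient(point))]
    # Scale each summed gradient by (input - baseline) / steps.
    return [(i - b) * t / steps for i, b, t in zip(inp, baseline, total)]

attr = integrated_gradients([2.0, 1.0], [0.0, 0.0])
print(attr)  # approximately [7.0, 3.0]
```

A useful sanity check on any integrated gradients implementation is the completeness property: the attributions sum to `f(input) - f(baseline)`, which for this toy model is 10.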
Author: LeetQuiz Editorial Team
You are a Machine Learning Engineer working on a project that involves deploying a complex model on Vertex AI, Google Cloud's managed ML platform. Your team is particularly interested in understanding the model's decision-making process by identifying the most influential features and their impact. Vertex Explainable AI offers several methods for feature attribution to achieve this. Considering the need for accuracy, scalability, and compliance with industry standards, which three methods does Vertex AI employ for feature attributions? Choose the three correct options.
A. Maximum Likelihood
B. Integrated gradients
C. Sampled Shapley
D. XRAI
E. Decision Trees