
Answer-first summary for fast verification
Answer: (D) Set up sampled Shapley explanations on Vertex Explainable AI to fairly distribute each feature's contribution to the prediction, considering all possible combinations of features; and (C) set up integrated gradients explanations on Vertex Explainable AI to understand each feature's contribution by integrating the gradients along the path from a baseline to the input.
Vertex Explainable AI offers multiple methods to interpret machine learning model predictions. Sampled Shapley explanations are particularly effective for understanding individual predictions: they fairly attribute each feature's contribution by averaging over combinations of features, and they are model-agnostic, which matters here because tree ensembles are not differentiable. Integrated gradients offer a complementary perspective for the differentiable (neural network) part of the model, attributing the prediction by integrating gradients along the path from a baseline to the input. Because the model in this question combines trees and neural networks, combining both methods gives a more comprehensive picture of why this customer's 70% churn prediction departs so far from the 15% average.
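As a rough sketch of how these two methods are configured, the dicts below mirror the field names of the Vertex AI `ExplanationParameters` API (`sampled_shapley_attribution.path_count`, `integrated_gradients_attribution.step_count`). The specific values chosen here are illustrative assumptions, not recommendations:

```python
# Illustrative sketch of the two explanation configurations (field names
# follow the Vertex AI ExplanationParameters API; values are assumptions).

sampled_shapley_params = {
    "sampled_shapley_attribution": {
        # Number of feature permutations to sample; higher is more
        # accurate but slower (Vertex AI accepts values in [1, 50]).
        "path_count": 25
    }
}

integrated_gradients_params = {
    "integrated_gradients_attribution": {
        # Number of steps in the Riemann-sum approximation of the
        # path integral from the baseline to the input.
        "step_count": 50
    }
}
```

One of these dicts would typically be wrapped in `aiplatform.explain.ExplanationParameters` and passed, along with explanation metadata, when uploading the model with `aiplatform.Model.upload(...)`; the exact wiring depends on your model format and SDK version, so treat this as a sketch rather than a verified deployment recipe.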
Author: LeetQuiz Editorial Team
In a subscription-based company, a model combining trees and neural networks predicts customer churn, the likelihood that a customer will not renew their yearly subscription. While the average churn prediction is 15%, one specific customer has a 70% predicted churn rate. This customer, who shows 30% product usage, lives in New York City, and has been a customer since 1997, stands out. The company is particularly interested in understanding the high churn prediction for this customer in order to take targeted retention actions. Given the need for transparency and actionable insights, how can Vertex Explainable AI best explain this discrepancy? (Choose two correct options if E is available, otherwise choose one.)
A
Train regional surrogate models to explain individual predictions, focusing on geographical and usage patterns.
B
Calculate the effect of each feature as the weight of the feature multiplied by the feature value, providing a linear approximation of feature importance.
C
Set up integrated gradients explanations on Vertex Explainable AI to understand the contribution of each feature to the prediction by integrating the gradients along the path from a baseline to the input.
D
Set up sampled Shapley explanations on Vertex Explainable AI to fairly distribute the contribution of each feature to the prediction, considering all possible combinations of features.
E
Combine both integrated gradients and sampled Shapley explanations to leverage the strengths of both methods for a comprehensive understanding of the prediction.
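For intuition on what options C and D actually compute, here is a toy, pure-Python sketch of both attribution methods. This is illustrative only, not the Vertex AI implementation (which runs against deployed models): sampled Shapley averages each feature's marginal contribution over randomly sampled feature orderings, and integrated gradients accumulates gradients along the straight-line path from a baseline to the input, with gradients approximated here by central differences.

```python
import random

def sampled_shapley(f, x, baseline, n_samples=200, seed=0):
    """Estimate Shapley values by averaging each feature's marginal
    contribution over randomly sampled feature orderings."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]      # switch feature i from baseline to input
            val = f(current)
            phi[i] += val - prev   # marginal contribution of feature i
            prev = val
    return [p / n_samples for p in phi]

def integrated_gradients(f, x, baseline, steps=50, eps=1e-5):
    """Approximate integrated gradients with a Riemann sum along the
    straight-line path from baseline to x; gradients via central
    differences (a stand-in for the model's true gradients)."""
    n = len(x)
    attr = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        for i in range(n):
            up, down = list(point), list(point)
            up[i] += eps
            down[i] -= eps
            grad = (f(up) - f(down)) / (2 * eps)
            attr[i] += grad * (x[i] - baseline[i]) / steps
    return attr
```

For a linear model such as f(z) = 2*z[0] + 3*z[1] with a zero baseline and input [1, 1], both methods recover the attributions [2, 3] up to floating-point error; for non-linear models like the tree-plus-network ensemble in this question, the two can disagree, which is why option E recommends using both.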