
At a subscription-based company, a model combining trees and neural networks predicts customer churn, i.e., the likelihood that a customer won't renew their yearly subscription. While the average churn prediction is 15%, one specific customer has a predicted churn rate of 70%. This customer stands out: 30% product usage, based in New York City, and a customer since 1997. The company particularly wants to understand this customer's high churn prediction so it can take targeted retention actions. Given the need for transparency and actionable insights, how can Vertex Explainable AI best explain this discrepancy? (Choose two correct options if E is available; otherwise choose one.)
A
Train regional surrogate models to explain individual predictions, focusing on geographical and usage patterns.
B
Calculate the effect of each feature as the weight of the feature multiplied by the feature value, providing a linear approximation of feature importance.
C
Set up integrated gradients explanations on Vertex Explainable AI to understand the contribution of each feature to the prediction by integrating the gradients along the path from a baseline to the input.
D
Set up sampled Shapley explanations on Vertex Explainable AI to fairly distribute the contribution of each feature to the prediction, considering all possible combinations of features.
E
Combine integrated gradients and sampled Shapley explanations to leverage the strengths of both methods for a comprehensive understanding of the prediction.
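
For reference, options C and D correspond to two standard attribution methods. In rough, illustrative notation (x is the customer's feature vector, x' a chosen baseline such as an average customer, F the model's churn score, and N the set of features), the attribution of feature i under integrated gradients and under Shapley values is:

\[ \mathrm{IG}_i(x) = (x_i - x'_i) \int_0^1 \frac{\partial F\bigl(x' + \alpha\,(x - x')\bigr)}{\partial x_i}\, d\alpha \]

\[ \phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl( F(S \cup \{i\}) - F(S) \bigr) \]

Integrated gradients requires a fully differentiable model, while sampled Shapley approximates the Shapley values by sampling permutations of the features and therefore also accommodates non-differentiable components such as trees.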
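
A minimal sketch of how the sampled Shapley explanation in option D might be configured with the google-cloud-aiplatform Python SDK; the project, bucket, display name, serving container, tensor names, and example feature values below are placeholders, not details from the question:

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Describe the model's input/output tensors so attributions map back to features.
explanation_metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"features": {"input_tensor_name": "features"}},                      # placeholder tensor name
    outputs={"churn_probability": {"output_tensor_name": "churn_probability"}},  # placeholder tensor name
)

# Sampled Shapley approximates Shapley values by sampling feature permutations,
# so it also works for non-differentiable components such as the tree part of the ensemble.
explanation_parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)

model = aiplatform.Model.upload(
    display_name="churn-model",                  # placeholder
    artifact_uri="gs://my-bucket/churn-model/",  # placeholder
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",  # placeholder
    explanation_metadata=explanation_metadata,
    explanation_parameters=explanation_parameters,
)

endpoint = model.deploy(machine_type="n1-standard-4")

# Request feature attributions for the high-churn customer (placeholder, pre-encoded features).
response = endpoint.explain(
    instances=[{"features": [0.30, 1.0, 1997.0]}]  # e.g. usage, city encoding, customer-since year
)
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)

Swapping the parameters to {"integrated_gradients_attribution": {"step_count": 50}} would configure option C instead, but that path requires the model to be differentiable end to end.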