
Answer-first summary for fast verification
Answer: Use AutoML Tables with built-in explainability features, and use Shapley values for explainability.
The question emphasizes building a solution that gives transparent explanations for AI-driven banking decisions while keeping operational overhead minimal. Option C (AutoML Tables with built-in explainability features and Shapley values) is optimal: AutoML Tables automates model building and hyperparameter tuning, which significantly reduces operational effort, and its built-in explainability uses Shapley values to provide mathematically grounded feature attributions without custom code. This aligns with the community consensus (75% support for C, with upvoted comments highlighting its managed-service benefits and scalability). Option B (Vertex Explainable AI) is flexible but typically requires more configuration and ongoing maintenance. Option A (LIT on App Engine) adds deployment and hosting complexity, and Option D (pre-trained models from TensorFlow Hub) offers no explainability tailored to the bank's tabular loan data, making both less suitable when minimal overhead is the priority.
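To see why Shapley values give trustworthy attributions, here is a minimal sketch that computes exact Shapley values for a toy loan-scoring model by enumerating feature coalitions. The model, feature names, and baseline are hypothetical illustrations, not part of AutoML Tables; managed services approximate this same computation at scale.

```python
from itertools import combinations
from math import factorial

def model(features):
    # Hypothetical toy loan-scoring model: weighted sum of normalized
    # income, credit-history length, and debt ratio.
    income, history, debt = features
    return 0.5 * income + 0.3 * history - 0.4 * debt

def shapley_values(f, x, baseline):
    """Exact Shapley attributions: each feature's marginal contribution,
    averaged over all coalitions, with absent features set to baseline."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Classic Shapley coalition weight |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

applicant = [0.8, 0.6, 0.3]   # this applicant's normalized features
average = [0.5, 0.5, 0.5]     # baseline: portfolio-average applicant
phi = shapley_values(model, applicant, average)
```

The efficiency property (attributions sum exactly to the difference between this prediction and the baseline prediction) is what makes these explanations defensible to a regulator or a declined applicant.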
Author: LeetQuiz Editorial Team
As a bank's ML engineer building a solution to provide transparent explanations for AI-driven loan decisions (approvals, credit limits, interest rates) with minimal operational overhead, what is your recommended approach?
A
Deploy the Learning Interpretability Tool (LIT) on App Engine to provide explainability and visualization of the output.
B
Use Vertex Explainable AI to generate feature attributions, and use feature-based explanations for your models.
C
Use AutoML Tables with built-in explainability features, and use Shapley values for explainability.
D
Deploy pre-trained models from TensorFlow Hub to provide explainability using visualization tools.