
Answer-first summary for fast verification
Answer: Use a boosted decision tree-based model architecture, and use SHAP values for interpretability.
Option C is correct because boosted decision tree models (such as XGBoost or LightGBM) deliver strong accuracy on structured data while remaining relatively interpretable through their tree structure and feature importance scores. SHAP (SHapley Additive exPlanations) provides theoretically grounded interpretability that is widely accepted in regulatory settings, offering both global and local explanations. By contrast, options A, B, and D rely on deep learning architectures (CNN, RNN, LSTM), which are inherently more opaque and harder to justify to regulators, even when paired with interpretability techniques such as LIME or integrated gradients. The community discussion unanimously supports C, noting that boosted trees are the only highly interpretable model among the options and that they perform well in production environments.
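To make the SHAP idea concrete, here is a minimal sketch that computes exact Shapley values for one prediction of a small boosted-tree model by enumerating feature coalitions. This is purely illustrative: in practice you would use the `shap` library's `TreeExplainer`, which computes the same attributions far more efficiently; the background-mean value function below is one common simplifying assumption, not the only choice.

```python
# Illustrative only: exact Shapley values by coalition enumeration for a
# boosted-tree classifier. The shap library's TreeExplainer does this
# efficiently in practice; this sketch just shows the additive-attribution
# idea that makes SHAP attractive for regulatory explanations.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
background = X.mean(axis=0)  # reference values standing in for "absent" features

def value(x, subset):
    """Model output with features outside `subset` replaced by the background."""
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def shapley(x):
    """Exact Shapley attribution per feature (feasible only for few features)."""
    n = x.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(x, S + (i,)) - value(x, S))
    return phi

x = X[0]
phi = shapley(x)
# Efficiency property: attributions sum to f(x) minus the baseline prediction.
assert abs(phi.sum() - (value(x, tuple(range(4))) - value(x, ()))) < 1e-9
```

The final assertion checks the additivity ("efficiency") property that makes SHAP explanations auditable: per-feature attributions sum exactly to the difference between the model's prediction and its baseline output.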
Author: LeetQuiz Editorial Team
You are developing a machine learning model on Vertex AI that must satisfy regulatory interpretability requirements. To maximize both accuracy and interpretability, you plan to use a combination of model architectures and modeling techniques. How should you build the model?
A. Use a convolutional neural network (CNN)-based deep learning model architecture, and use local interpretable model-agnostic explanations (LIME) for interpretability.
B. Use a recurrent neural network (RNN)-based deep learning model architecture, and use integrated gradients for interpretability.
C. Use a boosted decision tree-based model architecture, and use SHAP values for interpretability.
D. Use a long short-term memory (LSTM)-based model architecture, and use local interpretable model-agnostic explanations (LIME) for interpretability.