
Answer-first summary for fast verification
Answer: Upload the custom model to Vertex AI Model Registry and configure feature-based attribution by using sampled Shapley with input baselines.
The correct answer is C: upload the custom model to Vertex AI Model Registry and configure feature-based attribution by using sampled Shapley with input baselines. This approach reuses the already-developed, well-performing custom model; rebuilding it with AutoML (option A) or BigQuery ML (option B) would discard that work and require significant code changes. Vertex Explainable AI integrates with custom models through feature-based attribution methods such as sampled Shapley, returning per-feature attributions alongside each prediction without major modifications to the model code. Sampled Shapley approximates Shapley values by sampling feature permutations, which makes it applicable even to non-differentiable models, and the input baselines define the reference input against which each attribution is measured — for tabular data, a common choice is the per-feature training-set mean. Option D would require hand-implementing Shapley sampling inside the serving container, which is more effort and more error-prone than using the managed Explainable AI integration.
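To make the mechanics concrete, here is a minimal, illustrative sketch of the sampled Shapley idea in plain Python — not Vertex AI's actual implementation. It shows the role of the input baseline: each sampled permutation switches features one at a time from their baseline value to the instance value, and the change in model output at each switch is that feature's marginal contribution. The linear "risk score" model and its weights are hypothetical.

```python
import random

def sampled_shapley(model, instance, baseline, num_samples=200, seed=0):
    """Estimate per-feature Shapley attributions by sampling permutations.

    For each sampled permutation, features are switched one at a time from
    their baseline value to their instance value; the change in model output
    at each switch is that feature's marginal contribution for this sample.
    """
    rng = random.Random(seed)
    n = len(instance)
    attributions = [0.0] * n
    for _ in range(num_samples):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)           # start from the baseline input
        prev = model(current)
        for i in order:
            current[i] = instance[i]       # switch feature i to its real value
            cur = model(current)
            attributions[i] += cur - prev  # marginal contribution of feature i
            prev = cur
    return [a / num_samples for a in attributions]

# Hypothetical linear scoring model: for a linear model the attributions
# equal weight * (instance - baseline), so the estimate is exact.
weights = [0.5, -0.2, 0.3]
model = lambda x: sum(w * v for w, v in zip(weights, x))

instance = [4.0, 10.0, 1.0]
baseline = [0.0, 0.0, 0.0]
attrs = sampled_shapley(model, instance, baseline)
print(attrs)  # ≈ [2.0, -2.0, 0.3]
```

A useful sanity check is that the attributions sum to the difference between the model's output on the instance and on the baseline — this completeness property is why the choice of baseline matters for interpreting the explanations.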
Author: LeetQuiz Editorial Team
You are a machine learning engineer working for a bank, and you have developed a custom model to predict whether a loan application should be flagged for human review. The input features for this model are stored in a BigQuery table. The model has demonstrated strong performance in testing, and you are preparing to deploy it into production. However, due to stringent compliance requirements, the model must provide explanations for each of its predictions to ensure transparency and accountability. Your goal is to integrate this functionality into your model code with minimal effort while ensuring the explanations are as accurate as possible. What should you do?
A. Create an AutoML tabular model by using the BigQuery data with integrated Vertex Explainable AI.
B. Create a BigQuery ML deep neural network model and use the ML.EXPLAIN_PREDICT method with the num_integral_steps parameter.
C. Upload the custom model to Vertex AI Model Registry and configure feature-based attribution by using sampled Shapley with input baselines.
D. Update the custom serving container to include sampled Shapley-based explanations in the prediction outputs.
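For reference, option C translates into an explanation spec attached at model-upload time. The sketch below expresses that spec as plain dicts, assuming the google-cloud-aiplatform SDK; the project, bucket path, serving image, and feature names are hypothetical, and the upload call itself is shown only as a commented illustration since it requires GCP credentials.

```python
# Explanation spec for sampled Shapley with input baselines, in the dict
# shapes the Vertex AI SDK accepts. Feature and output names are hypothetical.
explanation_parameters = {
    # Number of feature permutations sampled per prediction; higher values
    # give more accurate attributions at higher latency/cost.
    "sampled_shapley_attribution": {"path_count": 25}
}
explanation_metadata = {
    "inputs": {
        "loan_features": {
            # Baseline the attributions are measured against; for tabular
            # data a common choice is the per-feature training-set mean.
            "input_baselines": [[0.0, 0.0, 0.0]],
        }
    },
    "outputs": {"flag_probability": {}},
}

# Upload to Vertex AI Model Registry (illustrative only; needs credentials):
# from google.cloud import aiplatform
# aiplatform.init(project="my-project", location="us-central1")
# model = aiplatform.Model.upload(
#     display_name="loan-review-model",
#     artifact_uri="gs://my-bucket/loan-model/",
#     serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
#     explanation_metadata=aiplatform.explain.ExplanationMetadata(explanation_metadata),
#     explanation_parameters=aiplatform.explain.ExplanationParameters(explanation_parameters),
# )
```

Once the model is deployed to an endpoint, explanations are requested per prediction (e.g. via the endpoint's explain method) with no changes to the model code itself, which is what makes option C the minimal-effort path.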