
Answer-first summary for fast verification
Answer: Use feature attribution in Vertex AI to analyze model predictions and the impact of each feature on the model's predictions.
Option B is the correct answer because it directly addresses both core requirements: explaining the model's decision-making process and identifying fairness issues related to demographic bias. Feature attribution in Vertex AI (using methods such as sampled Shapley values) quantifies how much each feature, including demographic ones, contributes to individual predictions, making bias transparent and measurable. This approach is also supported by the community discussion, where the upvoted comment highlights feature attribution as the right tool for identifying bias and fairness issues. Option A is less suitable because simply removing demographic features does not necessarily eliminate bias (correlated features such as ZIP code can act as proxies for demographics) and provides no insight into the existing model's decision-making. Option C focuses on training-serving skew and data drift, which concern model performance over time, not fairness analysis. Option D is reactive and inefficient: it depends on first compiling unfair predictions without any systematic method for detecting bias across all predictions.
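To see what feature attribution measures, consider the Shapley-value method that Vertex AI's sampled Shapley option approximates. For a linear model the Shapley values have a closed form, which makes the idea easy to verify. The sketch below uses hypothetical fraud-scoring features and weights (all values are illustrative, not from any real model):

```python
# Minimal sketch of what feature attribution (Shapley values) computes.
# For a linear model f(x) = sum(w_i * x_i) + b, the exact Shapley
# attribution of feature i relative to a baseline input is
# w_i * (x_i - baseline_i). Vertex AI's sampled-Shapley method
# approximates this quantity for arbitrary (non-linear) models.

def linear_model(x, w, b=0.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def shapley_linear(x, baseline, w):
    # Exact Shapley attributions for a linear model.
    return [wi * (xi - bi) for wi, xi, bi in zip(w, x, baseline)]

# Hypothetical features: [transaction_amount, account_age, zip_code_income]
w = [0.8, -0.3, 0.5]          # illustrative model weights
x = [2.0, 1.0, 3.0]           # the instance being explained
baseline = [0.0, 0.0, 0.0]    # reference ("neutral") input

attrib = shapley_linear(x, baseline, w)
print(attrib)  # [1.6, -0.3, 1.5]

# Completeness axiom: attributions sum to f(x) - f(baseline),
# so every unit of the score is accounted for by some feature.
print(sum(attrib))                                      # 2.8
print(linear_model(x, w) - linear_model(baseline, w))   # 2.8
```

If a demographic feature (or a proxy like `zip_code_income` here) consistently receives large attributions, that is direct, quantifiable evidence of the bias stakeholders are concerned about, which is exactly why option B answers the question.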
Author: LeetQuiz Editorial Team
You have developed a fraud detection model on Vertex AI for a large financial institution. While the model has high accuracy, stakeholders are concerned about potential bias related to customer demographics. You need to explain the model's decision-making process and identify any fairness issues. What is your recommended course of action?
A
Create feature groups using Vertex AI Feature Store to segregate customer demographic features and non-demographic features. Retrain the model using only non-demographic features.
B
Use feature attribution in Vertex AI to analyze model predictions and the impact of each feature on the model's predictions.
C
Enable Vertex AI Model Monitoring to detect training-serving skew. Configure an alert to send an email when the skew or drift for a model feature exceeds a predefined threshold. Retrain the model by appending new data to the existing training data.
D
Compile a dataset of unfair predictions. Use Vertex AI Vector Search to identify similar data points in the model's predictions. Report these data points to the stakeholders.