
Answer-first summary for fast verification
Answer: Audit the training dataset to identify underrepresented groups and augment the dataset with additional samples before retraining the model.
The correct answer is C because Google's Responsible AI practices emphasize addressing bias at its root cause: the training data. Auditing the dataset to identify underrepresented groups and augmenting it with additional samples before retraining mitigates bias where it originates. Option A is less suitable because post-processing adjustments mask rather than fix the underlying bias and may introduce new issues. Option B is incorrect because a more complex model may amplify existing biases rather than reduce them. Option D is suboptimal because manually adjusting predictions does not resolve the fundamental data imbalance causing the bias. The community discussion also supports C, noting that pre-processing mitigation (such as data augmentation) is generally preferred over post-processing, and that Google's best practices prioritize dataset fairness.
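A minimal sketch of the audit-and-augment workflow described above, in plain Python. All names (`audit_groups`, `oversample_minorities`, the `group` field) are illustrative, and the duplication-based oversampling stands in for real augmentation, which would add new, diverse samples rather than copies:

```python
import random
from collections import Counter

def audit_groups(examples, group_key="group"):
    """Count examples per demographic group to surface imbalance."""
    return Counter(ex[group_key] for ex in examples)

def oversample_minorities(examples, group_key="group", seed=0):
    """Naively duplicate minority-group examples until every group
    matches the largest group's count (illustrative only)."""
    rng = random.Random(seed)
    counts = audit_groups(examples, group_key)
    target = max(counts.values())
    augmented = list(examples)
    for group, n in counts.items():
        pool = [ex for ex in examples if ex[group_key] == group]
        augmented.extend(rng.choice(pool) for _ in range(target - n))
    return augmented

# Hypothetical feedback dataset skewed toward group "A"
feedback = (
    [{"text": "great", "label": "positive", "group": "A"}] * 80
    + [{"text": "bad", "label": "negative", "group": "B"}] * 20
)
print(audit_groups(feedback))   # Counter({'A': 80, 'B': 20})
balanced = oversample_minorities(feedback)
print(audit_groups(balanced))   # Counter({'A': 80, 'B': 80})
```

After balancing, the model would be retrained on the augmented dataset and re-evaluated for bias before deployment.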
Author: LeetQuiz Editorial Team
You are developing a natural language processing model to classify customer feedback as positive, negative, or neutral. During testing, you discover the model exhibits significant bias against specific demographic groups, skewing the analysis. According to Google's responsible AI practices, what steps should you take to address this issue?
A
Use Vertex AI's model evaluation to assess bias in the model's predictions, and use post-processing to adjust outputs for identified demographic discrepancies.
B
Implement a more complex model architecture that can capture nuanced patterns in language to reduce bias.
C
Audit the training dataset to identify underrepresented groups and augment the dataset with additional samples before retraining the model.
D
Use Vertex Explainable AI to generate explanations and systematically adjust the predictions to address identified biases.