
You are developing a natural language processing model to classify customer feedback as positive, negative, or neutral. During testing, you discover the model exhibits significant bias against specific demographic groups, skewing the analysis. According to Google's responsible AI practices, what steps should you take to address this issue?
A. Use Vertex AI's model evaluation to assess bias in the model's predictions, and use post-processing to adjust outputs for identified demographic discrepancies.
B. Implement a more complex model architecture that can capture nuanced patterns in language to reduce bias.
C. Audit the training dataset to identify underrepresented groups and augment the dataset with additional samples before retraining the model (illustrated in the sketch below).
D. Use Vertex Explainable AI to generate explanations and systematically adjust the predictions to address identified biases.
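To make option C's dataset audit concrete, here is a minimal, hypothetical sketch of checking per-group representation and label balance before augmenting and retraining. The column names (demographic_group, sentiment), the min_share threshold, and the feedback.csv path are illustrative assumptions, not part of any Vertex AI API.

```python
# A minimal sketch of a training-data audit for group representation,
# assuming a hypothetical feedback table with "demographic_group" and
# "sentiment" columns.
import pandas as pd

def audit_group_balance(df: pd.DataFrame, min_share: float = 0.10) -> pd.DataFrame:
    """Report each group's share of the data and its sentiment breakdown,
    flagging groups that fall below the (assumed) min_share threshold."""
    # Share of all rows belonging to each demographic group.
    share = df["demographic_group"].value_counts(normalize=True).rename("share")
    # Per-group distribution over sentiment labels (rows sum to 1).
    labels = (
        df.groupby("demographic_group")["sentiment"]
          .value_counts(normalize=True)
          .unstack(fill_value=0.0)
    )
    report = labels.join(share)
    report["underrepresented"] = report["share"] < min_share
    return report

if __name__ == "__main__":
    feedback = pd.read_csv("feedback.csv")  # hypothetical file path
    print(audit_group_balance(feedback))
```

Groups flagged as underrepresented would then be candidates for collecting or augmenting additional labeled samples before retraining, which is the data-level remediation option C describes.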