
## Answer (summary for fast verification)

**A and C**: Include fairness metrics for model evaluation, and modify the training data to mitigate bias.
## Explanation

When developing and deploying Large Language Models (LLMs), addressing fairness and bias is crucial for ethical AI deployment. The two most appropriate actions from the options are:

**A. Include fairness metrics for model evaluation** - This is essential to measure and monitor the model's performance across different demographic groups and ensure it doesn't produce biased outputs (a minimal metric sketch appears after the best-practices list below).

**C. Modify the training data to mitigate bias** - Since bias often originates from training data, addressing bias at the data level is a fundamental approach to creating fairer models (see the rebalancing sketch below).

### Why the other options are not the best choices

- **B. Adjust the temperature parameter of the model** - While temperature affects randomness in outputs, it doesn't directly address fairness or bias issues.
- **D. Avoid overfitting on the training data** - This is a general machine learning best practice, but it is not specifically about fairness in LLM deployment.
- **E. Apply prompt engineering techniques** - While prompt engineering can help guide model behavior, it's a workaround rather than a fix for the root causes of bias in the model itself.

### Best Practices for Fair LLM Development

1. **Data curation and preprocessing** to identify and mitigate biases
2. **Fairness-aware model training** with appropriate loss functions
3. **Comprehensive evaluation** using fairness metrics across different groups
4. **Continuous monitoring** in production to detect emerging biases
5. **Transparent documentation** of model limitations and biases
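To make option A concrete, here is a minimal sketch of one common group fairness metric, the demographic parity gap (the difference in positive-decision rates across groups). The binary 0/1 decisions, the group labels, and the toy data are illustrative assumptions, not part of the question scenario.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) for positive-prediction rates.

    predictions: iterable of 0/1 model decisions (e.g., "approved")
    groups: iterable of demographic group labels aligned with predictions
    A gap near 0 means the model selects each group at a similar rate.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: a binary decision evaluated over two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group = demographic_parity_gap(preds, groups)
print(f"per-group positive rates: {per_group}, parity gap: {gap:.2f}")
```

For generative LLM outputs (rather than binary decisions), evaluation typically relies on bias benchmarks and scored completions, but the same idea applies: compare an outcome rate group by group and track the gap.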
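Option C is usually implemented at the dataset level. Below is a minimal, assumption-laden sketch of one such technique, group-balanced oversampling; the `group` field and the dict schema are hypothetical, and real pipelines may instead reweight the loss, filter, or augment data.

```python
import random
from collections import Counter

def rebalance_by_group(examples, key=lambda ex: ex["group"], seed=0):
    """Oversample under-represented groups so each appears equally often.

    examples: list of dicts; `key` extracts a (hypothetical) group label.
    Returns a shuffled copy in which every group matches the largest
    group's count. This is simple oversampling, shown for illustration.
    """
    rng = random.Random(seed)
    counts = Counter(key(ex) for ex in examples)
    target = max(counts.values())
    balanced = list(examples)
    for group, count in counts.items():
        pool = [ex for ex in examples if key(ex) == group]
        balanced.extend(rng.choices(pool, k=target - count))
    rng.shuffle(balanced)
    return balanced

data = [{"text": "...", "group": "A"}] * 6 + [{"text": "...", "group": "B"}] * 2
print(Counter(ex["group"] for ex in rebalance_by_group(data)))  # A: 6, B: 6
```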
Author: Ritesh Yadav
## Question

What should the firm do when developing and deploying the LLM? (Select TWO.)

A. Include fairness metrics for model evaluation.
B. Adjust the temperature parameter of the model.
C. Modify the training data to mitigate bias.
D. Avoid overfitting on the training data.
E. Apply prompt engineering techniques.