
Answer-first summary for fast verification
Answer: A (Include fairness metrics for model evaluation) and C (Modify the training data to mitigate bias).
## Detailed Explanation

When developing and deploying a large language model (LLM) for document processing in a sensitive field such as accounting, responsible AI practices are crucial to mitigate potential harms such as bias, discrimination, and unfair outcomes. The question asks for the two actions that ensure responsible development and deployment.

### Selected Options

**A. Include fairness metrics for model evaluation**

- **Why this is optimal**: Fairness metrics assess whether the LLM produces equitable outputs across different demographic groups, data types, and scenarios. In accounting, where document processing can significantly affect clients' financial decisions, evaluating fairness helps identify and mitigate discriminatory patterns. Metrics such as demographic parity, equal opportunity, and disparate impact provide quantitative insight into model behavior, enabling continuous monitoring and validation against ethical standards. This aligns with AWS's responsible AI principles, which treat fairness as a core component of trustworthy AI systems.
- **Why the others are less suitable**: Adjusting the temperature parameter (B) or applying prompt engineering (E) can influence model outputs, but neither directly evaluates fairness. Temperature controls randomness in responses, and prompt engineering shapes input instructions; neither measures or ensures fairness without explicit metrics.

**C. Modify the training data to mitigate bias**

- **Why this is optimal**: Bias in AI models often originates in biased training data. Modifying the training data through techniques such as data balancing, augmentation, or debiasing addresses bias at its source. This proactive step reduces the risk of the LLM learning and perpetuating harmful stereotypes or unfair patterns, which is critical in accounting, where automated processing of financial documents must be impartial. AWS best practices highlight data quality and representativeness as foundational to responsible AI, making this a key action for deployment.
- **Why the others are less suitable**: Avoiding overfitting (D) improves generalization and performance but does not target bias: a model can generalize well and still be unfair if the underlying data is biased. Prompt engineering (E) can help guide model behavior but may not fully compensate for biases embedded in the training data without direct data modifications.

### Summary

Combining fairness metrics (A) with training-data modification (C) provides a comprehensive approach: the metrics enable ongoing evaluation and detection of issues, while data modification addresses the root causes of bias. This matches AWS's emphasis on both proactive and evaluative measures in responsible AI development, ensuring the LLM operates ethically and equitably in a high-stakes domain like accounting.
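The fairness metrics named above can be made concrete with a small sketch. The function names, the binary sensitive attribute, and the toy data below are illustrative assumptions, not part of any AWS API; the point is only to show what demographic parity difference and disparate impact ratio compute.

```python
# Sketch of two fairness metrics, assuming binary predictions (1 = positive
# outcome) and a binary sensitive attribute. Names are illustrative.

def selection_rate(preds, groups, group):
    """Fraction of positive predictions within one group."""
    in_group = [p for p, g in zip(preds, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between the two groups."""
    rates = [selection_rate(preds, groups, g) for g in sorted(set(groups))]
    return abs(rates[0] - rates[1])

def disparate_impact_ratio(preds, groups):
    """Ratio of the lower selection rate to the higher one.
    A common rule of thumb flags ratios below 0.8."""
    rates = [selection_rate(preds, groups, g) for g in sorted(set(groups))]
    return min(rates) / max(rates)

# Toy example: model approvals split by a sensitive attribute.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
print(disparate_impact_ratio(preds, groups))         # ≈ 0.33
```

Tracking these values over time is what turns fairness from a one-off audit into the continuous monitoring the explanation calls for.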
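One of the data-modification techniques mentioned, data balancing, can be sketched as simple oversampling: under-represented groups are resampled until every group contributes equally to training. The record fields (`doc`, `region`) are hypothetical placeholders for whatever sensitive grouping the firm tracks.

```python
# Minimal sketch of data balancing via oversampling, assuming records are
# dicts and the grouping field is known. Field names are hypothetical.
import random

def rebalance_by_group(records, group_key, seed=0):
    """Oversample each group (with replacement) up to the largest group's size."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        # Draw extra samples to bring this group up to the target size.
        balanced.extend(rng.choice(rows) for _ in range(target - len(rows)))
    return balanced

data = [{"doc": f"invoice{i}", "region": "x"} for i in range(6)] + \
       [{"doc": f"invoice{i}", "region": "y"} for i in range(2)]
balanced = rebalance_by_group(data, "region")
# Both regions now contribute 6 records each.
```

Oversampling is only one option; reweighting examples or augmenting the minority group with new documents avoids the duplicate records this approach introduces.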
Author: LeetQuiz Editorial Team
Which two actions should the firm take to responsibly develop and deploy the LLM for automating document processing?
A. Include fairness metrics for model evaluation.
B. Adjust the temperature parameter of the model.
C. Modify the training data to mitigate bias.
D. Avoid overfitting on the training data.
E. Apply prompt engineering techniques.